[
  {
    "path": ".gitignore",
    "content": "HELP.md\ntarget/\n!.mvn/wrapper/maven-wrapper.jar\n!**/src/main/**\n#**/src/test/**\n.idea/\n*.iml\n*.DS_Store\n\n### IntelliJ IDEA ###\n.idea\n*.iws\n*.ipr\n\n"
  },
  {
    "path": "README.md",
    "content": "# 1.友情提示\n\n> 1. 联系我：如要有问题咨询，请联系我（公众号：[`大数据羊说`](#32公众号)，备注来自`GitHub`）\n> 2. 该仓库会持续更新 flink 教程福利干货，麻烦路过的各位亲给这个项目点个 `star`，太不易了，写了这么多，算是对我坚持下来的一种鼓励吧！\n\n![在这里插入图片描述](https://raw.githubusercontent.com/yangyichao-mango/yangyichao-mango.github.io/master/1631459281928.png)\n\n![Stargazers over time](https://starchart.cc/yangyichao-mango/flink-study.svg)\n\n<br>\n<p align=\"center\">\n    <a href=\"#32公众号\" style=\"text-decoration:none;\">\n        <img src=\"https://img.shields.io/badge/WeChat-%E5%85%AC%E4%BC%97%E5%8F%B7-green\" alt=\"公众号\" />\n    </a>\n    <a href=\"https://www.zhihu.com/people/onemango\" target=\"_blank\" style=\"text-decoration:none;\">\n        <img src=\"https://img.shields.io/badge/zhihu-%E7%9F%A5%E4%B9%8E-blue\" alt=\"知乎\" />\n    </a>\n    <a href=\"https://juejin.cn/user/562562548382926\" target=\"_blank\" style=\"text-decoration:none;\">\n        <img src=\"https://img.shields.io/badge/juejin-%E6%8E%98%E9%87%91-blue\" alt=\"掘金\" />\n    </a>\n    <a href=\"https://blog.csdn.net/qq_34608620?spm=1001.2014.3001.5343&type=blog\" target=\"_blank\" style=\"text-decoration:none;\">\n        <img src=\"https://img.shields.io/badge/csdn-CSDN-red\" alt=\"CSDN\" />\n    </a>\n    <a href=\"https://home.51cto.com/space?uid=15322900\" target=\"_blank\" style=\"text-decoration:none;\">\n        <img src=\"https://img.shields.io/badge/51cto-51CT0%E5%8D%9A%E5%AE%A2-orange\" alt=\"51CT0博客\" />\n        </a>\n    <img src=\"https://img.shields.io/github/stars/yangyichao-mango/flink-study\" alt=\"投稿\">           \n</p>\n\n# 2.文章目录\n\n> 以下列出的是作者对原创的一些文章和一些学习资源做了一个汇总，会持续更新！如果帮到了您，请点个star支持一下，谢谢！\n\n## 2.1.flink sql\n\n1. [公众号文章：踩坑记 | flink sql count 还有这种坑！](https://mp.weixin.qq.com/s/5XDkmuEIfHB_WsMHPeinkw)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_01/countdistincterror)\n2. [公众号文章：实战 | flink sql 与微博热搜的碰撞！！！](https://mp.weixin.qq.com/s/GHLoWMBZxajA2nXPHhH8WA)\n3. [公众号文章：flink sql 知其所以然（一）| source\\sink 原理](https://mp.weixin.qq.com/s/xIXh8B_suAlKSp56aO5aEg)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink)\n4. [公众号文章：flink sql 知其所以然（二）| 自定义 redis 数据维表（附源码）](https://mp.weixin.qq.com/s/b_zV_tGp5QJQjgnSaxNT_Q)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink)\n5. [公众号文章：flink sql 知其所以然（三）| 自定义 redis 数据汇表（附源码）](https://mp.weixin.qq.com/s/7Fwey_AXNJ0jQZWfXvtNmw)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink)\n6. [公众号文章：flink sql 知其所以然（四）| sql api 类型系统](https://mp.weixin.qq.com/s/aqDRWgr3Kim7lblx10JvtA)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_04/type)\n7. [公众号文章：flink sql 知其所以然（五）| 自定义 protobuf format](https://mp.weixin.qq.com/s/STUC4trW-HA3cnrsqT-N6g)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats)\n8. [公众号文章：flink sql 知其所以然（六）| flink sql 约会 calcite（看这篇就够了）](https://mp.weixin.qq.com/s/SxRKp368mYSKVmuduPoXFg)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite)\n9. 
[公众号文章：flink sql 知其所以然（七）：不会连最适合 flink sql 的 ETL 和 group agg 场景都没见过吧？](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query)\n10. [公众号文章：flink sql 知其所以然（八）：flink sql tumble window 的奇妙解析之路](https://mp.weixin.qq.com/s/IRmt8dWmxAmbBh696akHdw)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window)\n11. [公众号文章：flink sql 知其所以然（九）：window tvf tumble window 的奇思妙解](https://mp.weixin.qq.com/s/QVuu5_N4lHo5gXlt1tdncw)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window)\n12. [公众号文章：flink sql 知其所以然（十）：大家都用 cumulate window 啦](https://mp.weixin.qq.com/s/IqAzjrQmcGmnxvHm1FAV5g)，[源码](https://github.com/yangyichao-mango/flink-study/blob/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/CumulateWindowTest.java)\n13. [公众号文章：flink sql 知其所以然（十一）：去重不仅仅有 count distinct 还有强大的 deduplication](https://mp.weixin.qq.com/s/VL6egD76B4J7IcpHShTq7Q)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_01_row_number)\n14. [公众号文章：flink sql 知其所以然（十二）：流 join 很难嘛？？？（上）](https://mp.weixin.qq.com/s/Z8QfKfhrX5KEnR-s7gRtsA)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_01_regular_joins)\n15. [公众号文章：flink sql 知其所以然（十三）：流 join 很难嘛？？？（下）]()，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins)\n16. [公众号文章：flink sql 知其所以然（十四）：维表 join 的性能优化之路（上）附源码]()，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis)\n17. [公众号文章：flink sql 知其所以然（十五）：改了改源码，实现了个 batch lookup join（附源码）]()，[源码](https://github.com/yangyichao-mango/flink-study/blob/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/RedisBatchLookupTest2.java)\n18. [公众号文章：flink sql 知其所以然（十八）：在 flink 中怎么使用 hive udf？附源码]()，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_02_stream_hive_udf)\n19. [公众号文章：flink sql 知其所以然（十九）：Table 与 DataStream 的转转转（附源码）]()，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_08_datastream_trans)\n20. [公众号文章：（上）史上最全干货！Flink SQL 成神之路（全文 18 万字、138 个案例、42 张图）]()，[源码](https://github.com/yangyichao-mango/flink-study/blob/main/flink-examples-1.13/src/main/java/flink/examples/sql)\n21. [公众号文章：（中）史上最全干货！Flink SQL 成神之路（全文 18 万字、138 个案例、42 张图）]()，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql)\n22. [公众号文章：（下）史上最全干货！Flink SQL 成神之路（全文 18 万字、138 个案例、42 张图）]()，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/sql)\n\n## 2.2.flink 实战\n\n1. [公众号文章：揭秘字节跳动埋点数据实时动态处理引擎（附源码）](https://mp.weixin.qq.com/s/PoK0XOA9OHIDJezb1fLOMw)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split)\n2. 
[公众号文章：踩坑记| flink state 序列化 java enum 竟然岔劈了](https://mp.weixin.qq.com/s/YElwTL-wzo2UVVIsIH_9YA)，[源码](https://github.com/yangyichao-mango/flink-study/tree/main/flink-examples-1.13/src/main/java/flink/examples/datastream/_03/enums_state)\n3. [公众号文章：flink idea 本地调试状态恢复](https://mp.weixin.qq.com/s/rLeKY_49q8rR9C_RmlTmhg)，[源码](https://github.com/yangyichao-mango/flink-study/blob/main/flink-examples-1.13/src/main/java/flink/examples/runtime/_04/statebackend/CancelAndRestoreWithCheckpointTest.java)\n\n# 3.联系我\n\n## 3.1.微信\n\n有任何学习上的疑惑都欢迎添加作者的微信，一起学习，一起交流！\n\n![在这里插入图片描述](https://raw.githubusercontent.com/yangyichao-mango/yangyichao-mango.github.io/master/1.png)\n\n## 3.2.公众号\n\n如果大家想要实时关注我更新的文章以及分享的干货的话，可以关注我的公众号：**大数据羊说**\n\n![在这里插入图片描述](https://raw.githubusercontent.com/yangyichao-mango/yangyichao-mango.github.io/master/2.png)\n"
  },
  {
    "path": "flink-examples-1.10/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <parent>\n        <artifactId>flink-study</artifactId>\n        <groupId>com.github.antigeneral</groupId>\n        <version>1.0-SNAPSHOT</version>\n    </parent>\n    <modelVersion>4.0.0</modelVersion>\n\n    <groupId>com.github.antigeneral</groupId>\n    <artifactId>flink-examples-1.10</artifactId>\n\n\n    <build>\n\n        <extensions>\n            <extension>\n                <groupId>kr.motd.maven</groupId>\n                <artifactId>os-maven-plugin</artifactId>\n                <version>${os-maven-plugin.version}</version>\n            </extension>\n        </extensions>\n\n        <plugins>\n\n\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-compiler-plugin</artifactId>\n                <configuration>\n                    <source>8</source>\n                    <target>8</target>\n                </configuration>\n            </plugin>\n\n            <plugin>\n                <groupId>org.xolstice.maven.plugins</groupId>\n                <artifactId>protobuf-maven-plugin</artifactId>\n                <version>${protobuf-maven-plugin.version}</version>\n                <configuration>\n                    <protoSourceRoot>\n                        src/test/proto\n                    </protoSourceRoot>\n                    <protocArtifact>\n                        com.google.protobuf:protoc:3.1.0:exe:${os.detected.classifier}\n                    </protocArtifact>\n                    <pluginId>grpc-java</pluginId>\n                    <pluginArtifact>\n                        io.grpc:protoc-gen-grpc-java:${grpc-plugin.version}:exe:${os.detected.classifier}\n                    </pluginArtifact>\n                </configuration>\n                <executions>\n                    <execution>\n                        <goals>\n                            <goal>compile</goal>\n                            <goal>compile-custom</goal>\n                        </goals>\n                    </execution>\n                </executions>\n            </plugin>\n        </plugins>\n    </build>\n\n    <properties>\n        <flink.version>1.10.1</flink.version>\n        <lombok.version>1.18.20</lombok.version>\n        <scala.binary.version>2.11</scala.binary.version>\n        <mvel2.version>2.4.12.Final</mvel2.version>\n        <curator.version>2.12.0</curator.version>\n        <kafka.version>2.1.1</kafka.version>\n        <groovy.version>2.5.7</groovy.version>\n        <gson.version>2.2.4</gson.version>\n        <guava.version>30.1.1-jre</guava.version>\n        <guava.retrying.version>2.0.0</guava.retrying.version>\n        <logback-classic.version>1.2.3</logback-classic.version>\n        <slf4j-log4j12.version>1.8.0-beta2</slf4j-log4j12.version>\n\n        <grpc-plugin.version>1.23.1</grpc-plugin.version>\n        <protobuf-maven-plugin.version>0.6.1</protobuf-maven-plugin.version>\n        <protobuf-java.version>3.11.0</protobuf-java.version>\n\n        <joda-time.version>2.5</joda-time.version>\n\n        <os-maven-plugin.version>1.6.2</os-maven-plugin.version>\n    </properties>\n\n<!--    <dependencies>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.httpcomponents</groupId>-->\n<!--            
<artifactId>httpclient</artifactId>-->\n<!--            <version>4.5.10</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>joda-time</groupId>-->\n<!--            <artifactId>joda-time</artifactId>-->\n<!--            &lt;!&ndash; managed version &ndash;&gt;-->\n<!--            <scope>provided</scope>-->\n<!--            &lt;!&ndash; Avro records can contain JodaTime fields when using logical fields.-->\n<!--                In order to handle them, we need to add an optional dependency.-->\n<!--                Users with those Avro records need to add this dependency themselves. &ndash;&gt;-->\n<!--            <optional>true</optional>-->\n<!--            <version>${joda-time.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>com.google.protobuf</groupId>-->\n<!--            <artifactId>protobuf-java</artifactId>-->\n<!--            <version>${protobuf-java.version}</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.github.rholder/guava-retrying &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.github.rholder</groupId>-->\n<!--            <artifactId>guava-retrying</artifactId>-->\n<!--            <version>${guava.retrying.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>com.google.guava</groupId>-->\n<!--            <artifactId>guava</artifactId>-->\n<!--            <version>${guava.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.projectlombok</groupId>-->\n<!--            <artifactId>lombok</artifactId>-->\n<!--            <version>${lombok.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-java</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-streaming-java_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-clients_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.mvel/mvel2 &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.mvel</groupId>-->\n<!--            <artifactId>mvel2</artifactId>-->\n<!--            <version>${mvel2.version}</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/redis.clients/jedis &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>redis.clients</groupId>-->\n<!--            <artifactId>jedis</artifactId>-->\n<!--            <version>3.6.3</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; 对zookeeper的底层api的一些封装 &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.curator</groupId>-->\n<!--            <artifactId>curator-framework</artifactId>-->\n<!--            <version>${curator.version}</version>-->\n<!--        </dependency>-->\n<!--        &lt;!&ndash; 
封装了一些高级特性，如：Cache事件监听、选举、分布式锁、分布式Barrier &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.curator</groupId>-->\n<!--            <artifactId>curator-recipes</artifactId>-->\n<!--            <version>${curator.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.kafka</groupId>-->\n<!--            <artifactId>kafka-clients</artifactId>-->\n<!--            <version>${kafka.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-ant</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-cli-commons</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-cli-picocli</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-console</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-datetime</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-docgenerator</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-groovydoc</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-groovysh</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-jmx</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-json</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-jsr223</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-macro</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--    
    </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-nio</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-servlet</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-sql</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-swing</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-templates</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-test</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-test-junit5</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-testng</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-xml</artifactId>-->\n<!--            <version>${groovy.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-planner_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>com.google.code.gson</groupId>-->\n<!--            <artifactId>gson</artifactId>-->\n<!--            <version>${gson.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-common</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-api-java</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-api-java-bridge_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            
<artifactId>flink-table-planner-blink_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.flink/flink-connector-jdbc &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-json</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.bahir/flink-connector-redis &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.bahir</groupId>-->\n<!--            <artifactId>flink-connector-redis_2.10</artifactId>-->\n<!--            <version>1.0</version>-->\n<!--        </dependency>-->\n\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-connector-kafka_2.12</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n\n<!--        <dependency>-->\n<!--            <groupId>ch.qos.logback</groupId>-->\n<!--            <artifactId>logback-classic</artifactId>-->\n<!--            <scope>compile</scope>-->\n<!--            <version>${logback-classic.version}</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.slf4j</groupId>-->\n<!--            <artifactId>slf4j-log4j12</artifactId>-->\n<!--            <version>${slf4j-log4j12.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n\n<!--            <groupId>org.apache.flink</groupId>-->\n\n<!--            <artifactId>flink-runtime-web_2.11</artifactId>-->\n\n<!--            <version>${flink.version}</version>-->\n\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.core</groupId>-->\n<!--            <artifactId>jackson-databind</artifactId>-->\n<!--            <version>2.12.4</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-kotlin &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.module</groupId>-->\n<!--            <artifactId>jackson-module-kotlin</artifactId>-->\n<!--            <version>2.12.4</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-parameter-names &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.module</groupId>-->\n<!--            <artifactId>jackson-module-parameter-names</artifactId>-->\n<!--            <version>2.12.4</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.datatype/jackson-datatype-guava &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.datatype</groupId>-->\n<!--            <artifactId>jackson-datatype-guava</artifactId>-->\n<!--            
<version>2.12.4</version>-->\n<!--        </dependency>-->\n\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.hubspot.jackson/jackson-datatype-protobuf &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.hubspot.jackson</groupId>-->\n<!--            <artifactId>jackson-datatype-protobuf</artifactId>-->\n<!--            <version>0.9.12</version>-->\n<!--        </dependency>-->\n\n\n<!--    </dependencies>-->\n\n\n</project>"
  },
  {
    "path": "flink-examples-1.10/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_01_outer_join/WindowJoinFunction$46.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._01_outer_join;\n\n\npublic class WindowJoinFunction$46\n        extends org.apache.flink.api.common.functions.RichFlatJoinFunction {\n\n    final org.apache.flink.table.dataformat.JoinedRow joinedRow = new org.apache.flink.table.dataformat.JoinedRow();\n\n    public WindowJoinFunction$46(Object[] references) throws Exception {\n\n    }\n\n\n    @Override\n    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n\n    }\n\n    @Override\n    public void join(Object _in1, Object _in2, org.apache.flink.util.Collector c) throws Exception {\n        org.apache.flink.table.dataformat.BaseRow in1 = (org.apache.flink.table.dataformat.BaseRow) _in1;\n        org.apache.flink.table.dataformat.BaseRow in2 = (org.apache.flink.table.dataformat.BaseRow) _in2;\n\n        int result$40;\n        boolean isNull$40;\n        int field$41;\n        boolean isNull$41;\n        int result$42;\n        boolean isNull$42;\n        int field$43;\n        boolean isNull$43;\n        boolean isNull$44;\n        boolean result$45;\n        result$40 = -1;\n        isNull$40 = true;\n        if (in1 != null) {\n            isNull$41 = in1.isNullAt(0);\n            field$41 = -1;\n            if (!isNull$41) {\n                field$41 = in1.getInt(0);\n            }\n            result$40 = field$41;\n            isNull$40 = isNull$41;\n        }\n        result$42 = -1;\n        isNull$42 = true;\n        if (in2 != null) {\n            isNull$43 = in2.isNullAt(0);\n            field$43 = -1;\n            if (!isNull$43) {\n                field$43 = in2.getInt(0);\n            }\n            result$42 = field$43;\n            isNull$42 = isNull$43;\n        }\n\n\n        isNull$44 = isNull$40 || isNull$42;\n        result$45 = false;\n        if (!isNull$44) {\n\n            result$45 = result$40 == result$42;\n\n        }\n\n        if (result$45) {\n\n            joinedRow.replace(in1, in2);\n            c.collect(joinedRow);\n        }\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.10/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_01_outer_join/_06_Interval_Outer_Joins_EventTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._01_outer_join;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.typeinfo.BasicTypeInfo;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class _06_Interval_Outer_Joins_EventTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);\n\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.10.1 Interval Join 事件时间案例\");\n\n        DataStream<Row> sourceTable = env.addSource(new UserDefinedSource1())\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<Row>(Time.minutes(0L)) {\n                    @Override\n                    public long extractTimestamp(Row row) {\n                        return (long) row.getField(2);\n                    }\n                });\n\n        tEnv.createTemporaryView(\"source_table\", sourceTable, \"user_id, name, timestamp, rowtime.rowtime\");\n\n        DataStream<Row> dimTable = env.addSource(new UserDefinedSource2())\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<Row>(Time.minutes(0L)) {\n                    @Override\n                 
   public long extractTimestamp(Row row) {\n                        return (long) row.getField(2);\n                    }\n                });\n\n        tEnv.createTemporaryView(\"dim_table\", dimTable, \"user_id, platform, timestamp, rowtime.rowtime\");\n\n        String sql = \"SELECT\\n\"\n                + \"    s.user_id as user_id,\\n\"\n                + \"    s.name as name,\\n\"\n                + \"    d.platform as platform\\n\"\n                + \"FROM source_table as s\\n\"\n                + \"FULL JOIN dim_table as d ON s.user_id = d.user_id\\n\"\n                + \"AND s.rowtime BETWEEN d.rowtime AND d.rowtime + INTERVAL '30' SECOND\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.KeyedCoProcessOperatorWithWatermarkDelay}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.RowTimeBoundedStreamJoin}\n          */\n\n        Table result = tEnv.sqlQuery(sql);\n\n        tEnv.toAppendStream(result, Row.class)\n                .print();\n\n        env.execute(\"1.10.1 Interval Full Join 事件时间案例\");\n\n    }\n\n    private static class UserDefinedSource1 implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                Row row = new Row(3);\n\n                row.setField(0, i);\n\n                row.setField(1, \"name\");\n\n                long timestamp = System.currentTimeMillis();\n\n                row.setField(2, timestamp);\n\n                sourceContext.collect(row);\n\n                Thread.sleep(1000L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.LONG_TYPE_INFO);\n        }\n    }\n\n    private static class UserDefinedSource2 implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 10;\n\n            while (!this.isCancel) {\n\n                Row row = new Row(3);\n\n                row.setField(0, i);\n\n                row.setField(1, \"platform\");\n\n                long timestamp = System.currentTimeMillis();\n\n                row.setField(2, timestamp);\n\n                sourceContext.collect(row);\n\n                Thread.sleep(1000L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.LONG_TYPE_INFO);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.12/.gitignore",
    "content": "HELP.md\ntarget/\n!.mvn/wrapper/maven-wrapper.jar\n!**/src/main/**\n#**/src/test/**\n.idea/\n*.iml\n*.DS_Store\n\n### IntelliJ IDEA ###\n.idea\n*.iws\n*.ipr\n\n"
  },
  {
    "path": "flink-examples-1.12/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <parent>\n        <artifactId>flink-study</artifactId>\n        <groupId>com.github.antigeneral</groupId>\n        <version>1.0-SNAPSHOT</version>\n    </parent>\n    <modelVersion>4.0.0</modelVersion>\n\n    <groupId>com.github.antigeneral</groupId>\n    <artifactId>flink-examples-1.12</artifactId>\n\n    <!--    <build>-->\n\n    <!--        <extensions>-->\n    <!--            <extension>-->\n    <!--                <groupId>kr.motd.maven</groupId>-->\n    <!--                <artifactId>os-maven-plugin</artifactId>-->\n    <!--                <version>${os-maven-plugin.version}</version>-->\n    <!--            </extension>-->\n    <!--        </extensions>-->\n\n    <!--        <plugins>-->\n\n\n\n    <!--            <plugin>-->\n    <!--                <groupId>org.apache.maven.plugins</groupId>-->\n    <!--                <artifactId>maven-compiler-plugin</artifactId>-->\n    <!--                <configuration>-->\n    <!--                    <source>8</source>-->\n    <!--                    <target>8</target>-->\n    <!--                </configuration>-->\n    <!--            </plugin>-->\n\n    <!--            <plugin>-->\n    <!--                <groupId>org.xolstice.maven.plugins</groupId>-->\n    <!--                <artifactId>protobuf-maven-plugin</artifactId>-->\n    <!--                <version>${protobuf-maven-plugin.version}</version>-->\n    <!--                <configuration>-->\n    <!--                    <protoSourceRoot>-->\n    <!--                        src/test/proto-->\n    <!--                    </protoSourceRoot>-->\n    <!--                    <protocArtifact>-->\n    <!--                        com.google.protobuf:protoc:3.1.0:exe:${os.detected.classifier}-->\n    <!--                    </protocArtifact>-->\n    <!--                    <pluginId>grpc-java</pluginId>-->\n    <!--                    <pluginArtifact>-->\n    <!--                        io.grpc:protoc-gen-grpc-java:${grpc-plugin.version}:exe:${os.detected.classifier}-->\n    <!--                    </pluginArtifact>-->\n    <!--                </configuration>-->\n    <!--                <executions>-->\n    <!--                    <execution>-->\n    <!--                        <goals>-->\n    <!--                            <goal>compile</goal>-->\n    <!--                            <goal>compile-custom</goal>-->\n    <!--                        </goals>-->\n    <!--                    </execution>-->\n    <!--                </executions>-->\n    <!--            </plugin>-->\n    <!--        </plugins>-->\n    <!--    </build>-->\n\n    <!--    <properties>-->\n    <!--        <flink.version>1.12.1</flink.version>-->\n    <!--        <lombok.version>1.18.20</lombok.version>-->\n    <!--        <scala.binary.version>2.11</scala.binary.version>-->\n    <!--        <mvel2.version>2.4.12.Final</mvel2.version>-->\n    <!--        <curator.version>2.12.0</curator.version>-->\n    <!--        <kafka.version>2.1.1</kafka.version>-->\n    <!--        <groovy.version>2.5.7</groovy.version>-->\n    <!--        <gson.version>2.2.4</gson.version>-->\n    <!--        <guava.version>30.1.1-jre</guava.version>-->\n    <!--        <guava.retrying.version>2.0.0</guava.retrying.version>-->\n    <!--        
<logback-classic.version>1.2.3</logback-classic.version>-->\n    <!--        <slf4j-log4j12.version>1.8.0-beta2</slf4j-log4j12.version>-->\n\n    <!--        <grpc-plugin.version>1.23.1</grpc-plugin.version>-->\n    <!--        <protobuf-maven-plugin.version>0.6.1</protobuf-maven-plugin.version>-->\n    <!--        <protobuf-java.version>3.11.0</protobuf-java.version>-->\n\n    <!--        <joda-time.version>2.5</joda-time.version>-->\n\n    <!--        <os-maven-plugin.version>1.6.2</os-maven-plugin.version>-->\n    <!--    </properties>-->\n\n    <!--    <dependencies>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.httpcomponents</groupId>-->\n    <!--            <artifactId>httpclient</artifactId>-->\n    <!--            <version>4.5.10</version>-->\n    <!--            <scope>compile</scope>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>joda-time</groupId>-->\n    <!--            <artifactId>joda-time</artifactId>-->\n    <!--            &lt;!&ndash; managed version &ndash;&gt;-->\n    <!--            <scope>provided</scope>-->\n    <!--            &lt;!&ndash; Avro records can contain JodaTime fields when using logical fields.-->\n    <!--                In order to handle them, we need to add an optional dependency.-->\n    <!--                Users with those Avro records need to add this dependency themselves. &ndash;&gt;-->\n    <!--            <optional>true</optional>-->\n    <!--            <version>${joda-time.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>com.google.protobuf</groupId>-->\n    <!--            <artifactId>protobuf-java</artifactId>-->\n    <!--            <version>${protobuf-java.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.github.rholder/guava-retrying &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>com.github.rholder</groupId>-->\n    <!--            <artifactId>guava-retrying</artifactId>-->\n    <!--            <version>${guava.retrying.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>com.google.guava</groupId>-->\n    <!--            <artifactId>guava</artifactId>-->\n    <!--            <version>${guava.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.projectlombok</groupId>-->\n    <!--            <artifactId>lombok</artifactId>-->\n    <!--            <version>${lombok.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-java</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-streaming-java_2.11</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-clients_2.11</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; 
https://mvnrepository.com/artifact/org.mvel/mvel2 &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.mvel</groupId>-->\n    <!--            <artifactId>mvel2</artifactId>-->\n    <!--            <version>${mvel2.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/redis.clients/jedis &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>redis.clients</groupId>-->\n    <!--            <artifactId>jedis</artifactId>-->\n    <!--            <version>3.6.3</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; 对zookeeper的底层api的一些封装 &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.curator</groupId>-->\n    <!--            <artifactId>curator-framework</artifactId>-->\n    <!--            <version>${curator.version}</version>-->\n    <!--        </dependency>-->\n    <!--        &lt;!&ndash; 封装了一些高级特性，如：Cache事件监听、选举、分布式锁、分布式Barrier &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.curator</groupId>-->\n    <!--            <artifactId>curator-recipes</artifactId>-->\n    <!--            <version>${curator.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.kafka</groupId>-->\n    <!--            <artifactId>kafka-clients</artifactId>-->\n    <!--            <version>${kafka.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-ant</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-cli-commons</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-cli-picocli</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-console</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-datetime</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-docgenerator</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-groovydoc</artifactId>-->\n    <!--            
<version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-groovysh</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-jmx</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-json</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-jsr223</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-macro</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-nio</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-servlet</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-sql</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-swing</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-templates</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-test</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-test-junit5</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-testng</artifactId>-->\n    <!--            <version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.codehaus.groovy</groupId>-->\n    <!--            <artifactId>groovy-xml</artifactId>-->\n    <!--            
<version>${groovy.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-table-planner_2.11</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>com.google.code.gson</groupId>-->\n    <!--            <artifactId>gson</artifactId>-->\n    <!--            <version>${gson.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-table-common</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--            <scope>compile</scope>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-table-api-java</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--            <scope>compile</scope>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-table-api-java-bridge_2.11</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--            <scope>compile</scope>-->\n    <!--        </dependency>-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-table-planner-blink_2.11</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--            <scope>compile</scope>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.flink/flink-connector-jdbc &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-connector-jdbc_2.11</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-connector-hbase-2.2_2.11</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-json</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.bahir/flink-connector-redis &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.bahir</groupId>-->\n    <!--            <artifactId>flink-connector-redis_2.10</artifactId>-->\n    <!--            <version>1.0</version>-->\n    <!--        </dependency>-->\n\n\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.apache.flink</groupId>-->\n    <!--            <artifactId>flink-connector-kafka_2.12</artifactId>-->\n    <!--            <version>${flink.version}</version>-->\n    <!--        </dependency>-->\n\n\n    <!--        <dependency>-->\n    <!--            
<groupId>ch.qos.logback</groupId>-->\n    <!--            <artifactId>logback-classic</artifactId>-->\n    <!--            <scope>compile</scope>-->\n    <!--            <version>${logback-classic.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>org.slf4j</groupId>-->\n    <!--            <artifactId>slf4j-log4j12</artifactId>-->\n    <!--            <version>${slf4j-log4j12.version}</version>-->\n    <!--        </dependency>-->\n\n    <!--        <dependency>-->\n\n    <!--            <groupId>org.apache.flink</groupId>-->\n\n    <!--            <artifactId>flink-runtime-web_2.11</artifactId>-->\n\n    <!--            <version>${flink.version}</version>-->\n\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>com.fasterxml.jackson.core</groupId>-->\n    <!--            <artifactId>jackson-databind</artifactId>-->\n    <!--            <version>2.12.4</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-kotlin &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>com.fasterxml.jackson.module</groupId>-->\n    <!--            <artifactId>jackson-module-kotlin</artifactId>-->\n    <!--            <version>2.12.4</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-parameter-names &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>com.fasterxml.jackson.module</groupId>-->\n    <!--            <artifactId>jackson-module-parameter-names</artifactId>-->\n    <!--            <version>2.12.4</version>-->\n    <!--        </dependency>-->\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.datatype/jackson-datatype-guava &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>com.fasterxml.jackson.datatype</groupId>-->\n    <!--            <artifactId>jackson-datatype-guava</artifactId>-->\n    <!--            <version>2.12.4</version>-->\n    <!--        </dependency>-->\n\n\n    <!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.hubspot.jackson/jackson-datatype-protobuf &ndash;&gt;-->\n    <!--        <dependency>-->\n    <!--            <groupId>com.hubspot.jackson</groupId>-->\n    <!--            <artifactId>jackson-datatype-protobuf</artifactId>-->\n    <!--            <version>0.9.12</version>-->\n    <!--        </dependency>-->\n\n\n\n    <!--    </dependencies>-->\n\n\n</project>"
  },
  {
    "path": "flink-examples-1.12/src/main/java/flink/examples/datastream/_07/query/_04_window/_04_TumbleWindowTest.java",
    "content": "package flink.examples.datastream._07.query._04_window;\n\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.tuple.Tuple4;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;\nimport org.apache.flink.streaming.api.windowing.time.Time;\n\npublic class _04_TumbleWindowTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        env.setParallelism(1);\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);\n\n        env.addSource(new UserDefinedSource())\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<Tuple4<String, String, Integer, Long>>(Time.seconds(0)) {\n                    @Override\n                    public long extractTimestamp(Tuple4<String, String, Integer, Long> element) {\n                        return element.f3;\n                    }\n                })\n                .keyBy(new KeySelector<Tuple4<String, String, Integer, Long>, String>() {\n                    @Override\n                    public String getKey(Tuple4<String, String, Integer, Long> row) throws Exception {\n                        return row.f0;\n                    }\n                })\n                .window(TumblingEventTimeWindows.of(Time.seconds(10)))\n                .sum(2)\n                .print();\n\n        env.execute(\"1.12.1 DataStream TUMBLE WINDOW 案例\");\n    }\n\n    private static class UserDefinedSource implements SourceFunction<Tuple4<String, String, Integer, Long>> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Tuple4<String, String, Integer, Long>> sourceContext) throws Exception {\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Tuple4.of(\"a\", \"b\", 1, System.currentTimeMillis()));\n\n                Thread.sleep(10L);\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n    }\n}"
  },
  {
    "path": "flink-examples-1.12/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_04_TumbleWindowTest.java",
    "content": "package flink.examples.sql._07.query._04_window_agg;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class _04_TumbleWindowTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"       sum(bucket_pv) as pv,\\n\"\n                + \"       sum(bucket_sum_price) as sum_price,\\n\"\n                + \"       max(bucket_max_price) as max_price,\\n\"\n                + \"       
min(bucket_min_price) as min_price,\\n\"\n                + \"       sum(bucket_uv) as uv,\\n\"\n                + \"       max(window_start) as window_start\\n\"\n                + \"from (\\n\"\n                + \"     select dim,\\n\"\n                + \"            count(*) as bucket_pv,\\n\"\n                + \"            sum(price) as bucket_sum_price,\\n\"\n                + \"            max(price) as bucket_max_price,\\n\"\n                + \"            min(price) as bucket_min_price,\\n\"\n                + \"            count(distinct user_id) as bucket_uv,\\n\"\n                + \"            cast(tumble_start(row_time, interval '1' minute) as bigint) * 1000 as window_start\\n\"\n                + \"     from source_table\\n\"\n                + \"     group by\\n\"\n                + \"            mod(user_id, 1024),\\n\"\n                + \"            dim,\\n\"\n                + \"            tumble(row_time, interval '1' minute)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"         window_start\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.12.1 TUMBLE WINDOW 案例\");\n\n        tEnv.executeSql(sourceSql);\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.12/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_04_TumbleWindowTest_GroupingWindowAggsHandler$59.java",
    "content": "package flink.examples.sql._07.query._04_window_agg;\n\n\npublic final class _04_TumbleWindowTest_GroupingWindowAggsHandler$59 implements\n        org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<org.apache.flink.table.runtime.operators.window.TimeWindow> {\n\n    long agg0_count1;\n    boolean agg0_count1IsNull;\n    long agg1_sum;\n    boolean agg1_sumIsNull;\n    long agg2_max;\n    boolean agg2_maxIsNull;\n    long agg3_min;\n    boolean agg3_minIsNull;\n    long agg4_count;\n    boolean agg4_countIsNull;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$22;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$23;\n    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview;\n    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_raw_value;\n    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n    org.apache.flink.table.data.GenericRowData acc$25 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData acc$27 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData aggValue$58 = new org.apache.flink.table.data.GenericRowData(9);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    private org.apache.flink.table.runtime.operators.window.TimeWindow namespace;\n\n    public _04_TumbleWindowTest_GroupingWindowAggsHandler$59(Object[] references) throws Exception {\n        externalSerializer$22 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[0]));\n        externalSerializer$23 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[1]));\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n        distinctAcc_0_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$22, externalSerializer$23);\n        distinctAcc_0_dataview_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview);\n\n        distinct_view_0 = distinctAcc_0_dataview;\n    }\n\n    @Override\n    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n        boolean isNull$34;\n        long result$35;\n        long field$36;\n        boolean isNull$36;\n        boolean isNull$37;\n        long result$38;\n        boolean isNull$41;\n        boolean result$42;\n        boolean isNull$46;\n        boolean result$47;\n        long field$51;\n        boolean isNull$51;\n        boolean isNull$53;\n        long result$54;\n        isNull$51 = accInput.isNullAt(4);\n        field$51 = -1L;\n        if (!isNull$51) {\n            field$51 = accInput.getLong(4);\n        }\n        isNull$36 = accInput.isNullAt(3);\n        field$36 = -1L;\n        if (!isNull$36) {\n            field$36 = accInput.getLong(3);\n        }\n\n\n        isNull$34 = agg0_count1IsNull || false;\n        result$35 = -1L;\n        if (!isNull$34) {\n\n            result$35 = (long) (agg0_count1 + ((long) 1L));\n\n        }\n\n        
agg0_count1 = result$35;\n        ;\n        agg0_count1IsNull = isNull$34;\n\n\n        long result$40 = -1L;\n        boolean isNull$40;\n        if (isNull$36) {\n\n            isNull$40 = agg1_sumIsNull;\n            if (!isNull$40) {\n                result$40 = agg1_sum;\n            }\n        } else {\n            long result$39 = -1L;\n            boolean isNull$39;\n            if (agg1_sumIsNull) {\n\n                isNull$39 = isNull$36;\n                if (!isNull$39) {\n                    result$39 = field$36;\n                }\n            } else {\n\n\n                isNull$37 = agg1_sumIsNull || isNull$36;\n                result$38 = -1L;\n                if (!isNull$37) {\n\n                    result$38 = (long) (agg1_sum + field$36);\n\n                }\n\n                isNull$39 = isNull$37;\n                if (!isNull$39) {\n                    result$39 = result$38;\n                }\n            }\n            isNull$40 = isNull$39;\n            if (!isNull$40) {\n                result$40 = result$39;\n            }\n        }\n        agg1_sum = result$40;\n        ;\n        agg1_sumIsNull = isNull$40;\n\n\n        long result$45 = -1L;\n        boolean isNull$45;\n        if (isNull$36) {\n\n            isNull$45 = agg2_maxIsNull;\n            if (!isNull$45) {\n                result$45 = agg2_max;\n            }\n        } else {\n            long result$44 = -1L;\n            boolean isNull$44;\n            if (agg2_maxIsNull) {\n\n                isNull$44 = isNull$36;\n                if (!isNull$44) {\n                    result$44 = field$36;\n                }\n            } else {\n                isNull$41 = isNull$36 || agg2_maxIsNull;\n                result$42 = false;\n                if (!isNull$41) {\n\n                    result$42 = field$36 > agg2_max;\n\n                }\n\n                long result$43 = -1L;\n                boolean isNull$43;\n                if (result$42) {\n\n                    isNull$43 = isNull$36;\n                    if (!isNull$43) {\n                        result$43 = field$36;\n                    }\n                } else {\n\n                    isNull$43 = agg2_maxIsNull;\n                    if (!isNull$43) {\n                        result$43 = agg2_max;\n                    }\n                }\n                isNull$44 = isNull$43;\n                if (!isNull$44) {\n                    result$44 = result$43;\n                }\n            }\n            isNull$45 = isNull$44;\n            if (!isNull$45) {\n                result$45 = result$44;\n            }\n        }\n        agg2_max = result$45;\n        ;\n        agg2_maxIsNull = isNull$45;\n\n\n        long result$50 = -1L;\n        boolean isNull$50;\n        if (isNull$36) {\n\n            isNull$50 = agg3_minIsNull;\n            if (!isNull$50) {\n                result$50 = agg3_min;\n            }\n        } else {\n            long result$49 = -1L;\n            boolean isNull$49;\n            if (agg3_minIsNull) {\n\n                isNull$49 = isNull$36;\n                if (!isNull$49) {\n                    result$49 = field$36;\n                }\n            } else {\n                isNull$46 = isNull$36 || agg3_minIsNull;\n                result$47 = false;\n                if (!isNull$46) {\n\n                    result$47 = field$36 < agg3_min;\n\n                }\n\n                long result$48 = -1L;\n                boolean isNull$48;\n                if (result$47) {\n\n                    isNull$48 = 
isNull$36;\n                    if (!isNull$48) {\n                        result$48 = field$36;\n                    }\n                } else {\n\n                    isNull$48 = agg3_minIsNull;\n                    if (!isNull$48) {\n                        result$48 = agg3_min;\n                    }\n                }\n                isNull$49 = isNull$48;\n                if (!isNull$49) {\n                    result$49 = result$48;\n                }\n            }\n            isNull$50 = isNull$49;\n            if (!isNull$50) {\n                result$50 = result$49;\n            }\n        }\n        agg3_min = result$50;\n        ;\n        agg3_minIsNull = isNull$50;\n\n\n        Long distinctKey$52 = (Long) field$51;\n        if (isNull$51) {\n            distinctKey$52 = null;\n        }\n\n        Long value$56 = (Long) distinct_view_0.get(distinctKey$52);\n        if (value$56 == null) {\n            value$56 = 0L;\n        }\n\n        boolean is_distinct_value_changed_0 = false;\n\n        long existed$57 = ((long) value$56) & (1L << 0);\n        if (existed$57 == 0) {  // not existed\n            value$56 = ((long) value$56) | (1L << 0);\n            is_distinct_value_changed_0 = true;\n\n            long result$55 = -1L;\n            boolean isNull$55;\n            if (isNull$51) {\n\n                isNull$55 = agg4_countIsNull;\n                if (!isNull$55) {\n                    result$55 = agg4_count;\n                }\n            } else {\n\n\n                isNull$53 = agg4_countIsNull || false;\n                result$54 = -1L;\n                if (!isNull$53) {\n\n                    result$54 = (long) (agg4_count + ((long) 1L));\n\n                }\n\n                isNull$55 = isNull$53;\n                if (!isNull$55) {\n                    result$55 = result$54;\n                }\n            }\n            agg4_count = result$55;\n            ;\n            agg4_countIsNull = isNull$55;\n\n        }\n\n        if (is_distinct_value_changed_0) {\n            distinct_view_0.put(distinctKey$52, value$56);\n        }\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        throw new RuntimeException(\n                \"This function not require retract method, but the retract method is called.\");\n\n    }\n\n    @Override\n    public void merge(org.apache.flink.table.runtime.operators.window.TimeWindow ns,\n            org.apache.flink.table.data.RowData otherAcc) throws Exception {\n        namespace = (org.apache.flink.table.runtime.operators.window.TimeWindow) ns;\n\n        throw new RuntimeException(\"This function not require merge method, but the merge method is called.\");\n\n    }\n\n    @Override\n    public void setAccumulators(org.apache.flink.table.runtime.operators.window.TimeWindow ns,\n            org.apache.flink.table.data.RowData acc)\n            throws Exception {\n        namespace = (org.apache.flink.table.runtime.operators.window.TimeWindow) ns;\n\n        long field$28;\n        boolean isNull$28;\n        long field$29;\n        boolean isNull$29;\n        long field$30;\n        boolean isNull$30;\n        long field$31;\n        boolean isNull$31;\n        long field$32;\n        boolean isNull$32;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$33;\n        boolean isNull$33;\n        isNull$32 = acc.isNullAt(4);\n        field$32 = -1L;\n        if (!isNull$32) {\n            field$32 = acc.getLong(4);\n        }\n        
isNull$28 = acc.isNullAt(0);\n        field$28 = -1L;\n        if (!isNull$28) {\n            field$28 = acc.getLong(0);\n        }\n        isNull$29 = acc.isNullAt(1);\n        field$29 = -1L;\n        if (!isNull$29) {\n            field$29 = acc.getLong(1);\n        }\n        isNull$31 = acc.isNullAt(3);\n        field$31 = -1L;\n        if (!isNull$31) {\n            field$31 = acc.getLong(3);\n        }\n\n        // when namespace is null, the dataview is used in heap, no key and namespace set\n        if (namespace != null) {\n            distinctAcc_0_dataview.setCurrentNamespace(namespace);\n            distinct_view_0 = distinctAcc_0_dataview;\n        } else {\n            isNull$33 = acc.isNullAt(5);\n            field$33 = null;\n            if (!isNull$33) {\n                field$33 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n            }\n            distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$33.getJavaObject();\n        }\n\n        isNull$30 = acc.isNullAt(2);\n        field$30 = -1L;\n        if (!isNull$30) {\n            field$30 = acc.getLong(2);\n        }\n\n        agg0_count1 = field$28;\n        ;\n        agg0_count1IsNull = isNull$28;\n\n\n        agg1_sum = field$29;\n        ;\n        agg1_sumIsNull = isNull$29;\n\n\n        agg2_max = field$30;\n        ;\n        agg2_maxIsNull = isNull$30;\n\n\n        agg3_min = field$31;\n        ;\n        agg3_minIsNull = isNull$31;\n\n\n        agg4_count = field$32;\n        ;\n        agg4_countIsNull = isNull$32;\n\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n\n\n        acc$27 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (agg0_count1IsNull) {\n            acc$27.setField(0, null);\n        } else {\n            acc$27.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            acc$27.setField(1, null);\n        } else {\n            acc$27.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            acc$27.setField(2, null);\n        } else {\n            acc$27.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            acc$27.setField(3, null);\n        } else {\n            acc$27.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            acc$27.setField(4, null);\n        } else {\n            acc$27.setField(4, agg4_count);\n        }\n\n\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$26 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n\n        if (false) {\n            acc$27.setField(5, null);\n        } else {\n            acc$27.setField(5, distinct_acc$26);\n        }\n\n\n        return acc$27;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n\n\n        acc$25 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (false) {\n            acc$25.setField(0, null);\n        } else {\n            acc$25.setField(0, ((long) 0L));\n        }\n\n\n        if (true) {\n            acc$25.setField(1, null);\n        } else {\n            acc$25.setField(1, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$25.setField(2, null);\n        } else {\n            acc$25.setField(2, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$25.setField(3, null);\n        } else {\n   
         acc$25.setField(3, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$25.setField(4, null);\n        } else {\n            acc$25.setField(4, ((long) 0L));\n        }\n\n\n        org.apache.flink.table.api.dataview.MapView mapview$24 = new org.apache.flink.table.api.dataview.MapView();\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$24 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$24);\n\n        if (false) {\n            acc$25.setField(5, null);\n        } else {\n            acc$25.setField(5, distinct_acc$24);\n        }\n\n\n        return acc$25;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue(org.apache.flink.table.runtime.operators.window.TimeWindow ns)\n            throws Exception {\n        namespace = (org.apache.flink.table.runtime.operators.window.TimeWindow) ns;\n\n\n        aggValue$58 = new org.apache.flink.table.data.GenericRowData(9);\n\n\n        if (agg0_count1IsNull) {\n            aggValue$58.setField(0, null);\n        } else {\n            aggValue$58.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            aggValue$58.setField(1, null);\n        } else {\n            aggValue$58.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            aggValue$58.setField(2, null);\n        } else {\n            aggValue$58.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            aggValue$58.setField(3, null);\n        } else {\n            aggValue$58.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            aggValue$58.setField(4, null);\n        } else {\n            aggValue$58.setField(4, agg4_count);\n        }\n\n\n        if (false) {\n            aggValue$58.setField(5, null);\n        } else {\n            aggValue$58.setField(5, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace.getStart()));\n        }\n\n\n        if (false) {\n            aggValue$58.setField(6, null);\n        } else {\n            aggValue$58.setField(6, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace.getEnd()));\n        }\n\n\n        if (false) {\n            aggValue$58.setField(7, null);\n        } else {\n            aggValue$58.setField(7, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace.getEnd() - 1));\n        }\n\n\n        if (true) {\n            aggValue$58.setField(8, null);\n        } else {\n            aggValue$58.setField(8, org.apache.flink.table.data.TimestampData.fromEpochMillis(-1L));\n        }\n\n\n        return aggValue$58;\n\n    }\n\n    @Override\n    public void cleanup(org.apache.flink.table.runtime.operators.window.TimeWindow ns) throws Exception {\n        namespace = (org.apache.flink.table.runtime.operators.window.TimeWindow) ns;\n\n        distinctAcc_0_dataview.setCurrentNamespace(namespace);\n        distinctAcc_0_dataview.clear();\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.12/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_04_TumbleWindowTest_KeyProjection$69.java",
    "content": "package flink.examples.sql._07.query._04_window_agg;\n\n\npublic final class _04_TumbleWindowTest_KeyProjection$69 implements\n        org.apache.flink.table.runtime.generated.Projection<org.apache.flink.table.data.RowData,\n                org.apache.flink.table.data.binary.BinaryRowData> {\n\n    org.apache.flink.table.data.binary.BinaryRowData out = new org.apache.flink.table.data.binary.BinaryRowData(2);\n    org.apache.flink.table.data.writer.BinaryRowWriter outWriter =\n            new org.apache.flink.table.data.writer.BinaryRowWriter(out);\n\n    public _04_TumbleWindowTest_KeyProjection$69(Object[] references) throws Exception {\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.binary.BinaryRowData apply(org.apache.flink.table.data.RowData in1) {\n        int field$70;\n        boolean isNull$70;\n        org.apache.flink.table.data.binary.BinaryStringData field$71;\n        boolean isNull$71;\n        outWriter.reset();\n        isNull$70 = in1.isNullAt(0);\n        field$70 = -1;\n        if (!isNull$70) {\n            field$70 = in1.getInt(0);\n        }\n        if (isNull$70) {\n            outWriter.setNullAt(0);\n        } else {\n            outWriter.writeInt(0, field$70);\n        }\n\n        isNull$71 = in1.isNullAt(1);\n        field$71 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n        if (!isNull$71) {\n            field$71 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(1));\n        }\n        if (isNull$71) {\n            outWriter.setNullAt(1);\n        } else {\n            outWriter.writeString(1, field$71);\n        }\n\n        outWriter.complete();\n\n\n        return out;\n    }\n}"
  },
  {
    "path": "flink-examples-1.12/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_04_TumbleWindowTest_WatermarkGenerator$6.java",
    "content": "package flink.examples.sql._07.query._04_window_agg;\n\n\npublic final class _04_TumbleWindowTest_WatermarkGenerator$6\n        extends org.apache.flink.table.runtime.generated.WatermarkGenerator {\n\n\n    public _04_TumbleWindowTest_WatermarkGenerator$6(Object[] references) throws Exception {\n\n    }\n\n    @Override\n    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n\n    }\n\n    @Override\n    public Long currentWatermark(org.apache.flink.table.data.RowData row) throws Exception {\n\n        org.apache.flink.table.data.TimestampData field$7;\n        boolean isNull$7;\n        boolean isNull$8;\n        org.apache.flink.table.data.TimestampData result$9;\n        isNull$7 = row.isNullAt(3);\n        field$7 = null;\n        if (!isNull$7) {\n            field$7 = row.getTimestamp(3, 3);\n        }\n\n\n        isNull$8 = isNull$7 || false;\n        result$9 = null;\n        if (!isNull$8) {\n\n            result$9 = org.apache.flink.table.data.TimestampData\n                    .fromEpochMillis(field$7.getMillisecond() - ((long) 5000L), field$7.getNanoOfMillisecond());\n\n        }\n\n        if (isNull$8) {\n            return null;\n        } else {\n            return result$9.getMillisecond();\n        }\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/.gitignore",
    "content": "HELP.md\ntarget/\n!.mvn/wrapper/maven-wrapper.jar\n!**/src/main/**\n#**/src/test/**\n.idea/\n*.iml\n*.DS_Store\n\n### IntelliJ IDEA ###\n.idea\n*.iws\n*.ipr\n\n"
  },
  {
    "path": "flink-examples-1.13/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <parent>\n        <artifactId>flink-study</artifactId>\n        <groupId>com.github.antigeneral</groupId>\n        <version>1.0-SNAPSHOT</version>\n    </parent>\n    <modelVersion>4.0.0</modelVersion>\n\n    <groupId>com.github.antigeneral</groupId>\n    <artifactId>flink-examples-1.13</artifactId>\n\n    <build>\n\n        <extensions>\n            <extension>\n                <groupId>kr.motd.maven</groupId>\n                <artifactId>os-maven-plugin</artifactId>\n                <version>${os-maven-plugin.version}</version>\n            </extension>\n        </extensions>\n\n        <plugins>\n\n\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-compiler-plugin</artifactId>\n            </plugin>\n\n            <plugin>\n                <groupId>org.xolstice.maven.plugins</groupId>\n                <artifactId>protobuf-maven-plugin</artifactId>\n            </plugin>\n\n            <!--            <plugin>-->\n            <!--                &lt;!&ndash; Extract parser grammar template from calcite-core.jar and put-->\n            <!--                     it under ${project.build.directory} where all freemarker templates are. &ndash;&gt;-->\n            <!--                <groupId>org.apache.maven.plugins</groupId>-->\n            <!--                <artifactId>maven-dependency-plugin</artifactId>-->\n            <!--                <executions>-->\n            <!--                    <execution>-->\n            <!--                        <id>unpack-parser-template</id>-->\n            <!--                        <phase>initialize</phase>-->\n            <!--                        <goals>-->\n            <!--                            <goal>unpack</goal>-->\n            <!--                        </goals>-->\n            <!--                        <configuration>-->\n            <!--                            <artifactItems>-->\n            <!--                                <artifactItem>-->\n            <!--                                    <groupId>org.apache.calcite</groupId>-->\n            <!--                                    <artifactId>calcite-core</artifactId>-->\n            <!--                                    <type>jar</type>-->\n            <!--                                    <overWrite>true</overWrite>-->\n            <!--                                    <outputDirectory>${project.build.directory}/</outputDirectory>-->\n            <!--                                    <includes>**/Parser.jj</includes>-->\n            <!--                                </artifactItem>-->\n            <!--                            </artifactItems>-->\n            <!--                        </configuration>-->\n            <!--                    </execution>-->\n            <!--                </executions>-->\n            <!--            </plugin>-->\n            <!--            &lt;!&ndash; adding fmpp code gen &ndash;&gt;-->\n            <!--            <plugin>-->\n            <!--                <artifactId>maven-resources-plugin</artifactId>-->\n            <!--            </plugin>-->\n            <!--            <plugin>-->\n            <!--                
<groupId>com.googlecode.fmpp-maven-plugin</groupId>-->\n            <!--                <artifactId>fmpp-maven-plugin</artifactId>-->\n            <!--            </plugin>-->\n            <!--            <plugin>-->\n            <!--                &lt;!&ndash; This must be run AFTER the fmpp-maven-plugin &ndash;&gt;-->\n            <!--                <groupId>org.codehaus.mojo</groupId>-->\n            <!--                <artifactId>javacc-maven-plugin</artifactId>-->\n            <!--            </plugin>-->\n            <!--            <plugin>-->\n            <!--                <groupId>org.apache.maven.plugins</groupId>-->\n            <!--                <artifactId>maven-surefire-plugin</artifactId>-->\n            <!--            </plugin>-->\n        </plugins>\n    </build>\n\n\n    <dependencies>\n\n\n\n        <dependency>\n            <groupId>com.google.protobuf</groupId>\n            <artifactId>protobuf-java</artifactId>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-connector-hive_2.11</artifactId>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.hadoop</groupId>\n            <artifactId>hadoop-common</artifactId>\n            <version>3.1.0</version>\n            <scope>compile</scope>\n            <exclusions>\n                <exclusion>\n                    <artifactId>slf4j-log4j12</artifactId>\n                    <groupId>org.slf4j</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>commons-logging</artifactId>\n                    <groupId>commons-logging</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>servlet-api</artifactId>\n                    <groupId>javax.servlet</groupId>\n                </exclusion>\n            </exclusions>\n            <optional>true</optional>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.hive</groupId>\n            <artifactId>hive-exec</artifactId>\n            <exclusions>\n                <exclusion>\n                    <artifactId>log4j-slf4j-impl</artifactId>\n                    <groupId>org.apache.logging.log4j</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>guava</artifactId>\n                    <groupId>com.google.guava</groupId>\n                </exclusion>\n                <!--                <exclusion>-->\n<!--                    <artifactId>hadoop-common</artifactId>-->\n<!--                    <groupId>org.apache.hadoop</groupId>-->\n<!--                </exclusion>-->\n            </exclusions>\n        </dependency>\n<!--        <dependency>-->\n<!--            <groupId>org.apache.hadoop</groupId>-->\n<!--            <artifactId>hadoop-common</artifactId>-->\n<!--            <version>${hadoop.version}</version>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>slf4j-log4j12</artifactId>-->\n<!--                    <groupId>org.slf4j</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jsr311-api</artifactId>-->\n<!--                    <groupId>javax.ws.rs</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-core</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                
</exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-server</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-servlet</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-json</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.hadoop</groupId>-->\n<!--            <artifactId>hadoop-client</artifactId>-->\n<!--            <version>${hadoop.version}</version>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>guava</artifactId>-->\n<!--                    <groupId>com.google.guava</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>hadoop-common</artifactId>-->\n<!--                    <groupId>org.apache.hadoop</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.hadoop</groupId>-->\n<!--            <artifactId>hadoop-hdfs</artifactId>-->\n<!--            <version>${hadoop.version}</version>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jsr311-api</artifactId>-->\n<!--                    <groupId>javax.ws.rs</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-core</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-server</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>guava</artifactId>-->\n<!--                    <groupId>com.google.guava</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n        <dependency>\n            <groupId>org.apache.hadoop</groupId>\n            <artifactId>hadoop-mapreduce-client-core</artifactId>\n            <version>3.1.0</version>\n            <exclusions>\n                <exclusion>\n                    <artifactId>slf4j-log4j12</artifactId>\n                    <groupId>org.slf4j</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>jersey-client</artifactId>\n                    <groupId>com.sun.jersey</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>jersey-server</artifactId>\n                    <groupId>com.sun.jersey</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>jersey-servlet</artifactId>\n                    <groupId>com.sun.jersey</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>jersey-core</artifactId>\n                    <groupId>com.sun.jersey</groupId>\n                </exclusion>\n           
     <exclusion>\n                    <artifactId>jersey-json</artifactId>\n                    <groupId>com.sun.jersey</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>guava</artifactId>\n                    <groupId>com.google.guava</groupId>\n                </exclusion>\n            </exclusions>\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/net.mguenther.kafka/kafka-junit -->\n<!--        <dependency>-->\n<!--            <groupId>net.mguenther.kafka</groupId>-->\n<!--            <artifactId>kafka-junit</artifactId>-->\n<!--        </dependency>-->\n\n        <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-library -->\n<!--        <dependency>-->\n<!--            <groupId>org.scala-lang</groupId>-->\n<!--            <artifactId>scala-library</artifactId>-->\n<!--        </dependency>-->\n\n        <dependency>\n            <groupId>com.twitter</groupId>\n            <artifactId>chill-protobuf</artifactId>\n            <!-- exclusions for dependency conversion -->\n            <exclusions>\n                <exclusion>\n                    <groupId>com.esotericsoftware.kryo</groupId>\n                    <artifactId>kryo</artifactId>\n                </exclusion>\n            </exclusions>\n        </dependency>\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.kafka</groupId>-->\n<!--            <artifactId>kafka_2.13</artifactId>-->\n<!--        </dependency>-->\n\n\n        <dependency>\n            <groupId>junit</groupId>\n            <artifactId>junit</artifactId>\n            <scope>test</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>net.java.dev.javacc</groupId>\n            <artifactId>javacc</artifactId>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.httpcomponents</groupId>\n            <artifactId>httpclient</artifactId>\n            <version>4.5.10</version>\n            <scope>compile</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-statebackend-rocksdb_2.11</artifactId>\n            <version>${flink.version}</version>\n        </dependency>\n\n        <dependency>\n            <groupId>joda-time</groupId>\n            <artifactId>joda-time</artifactId>\n            <!-- managed version -->\n            <scope>provided</scope>\n            <!-- Avro records can contain JodaTime fields when using logical fields.\n                In order to handle them, we need to add an optional dependency.\n                Users with those Avro records need to add this dependency themselves. 
-->\n            <optional>true</optional>\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/com.github.rholder/guava-retrying -->\n        <dependency>\n            <groupId>com.github.rholder</groupId>\n            <artifactId>guava-retrying</artifactId>\n            <exclusions>\n                <exclusion>\n                    <artifactId>guava</artifactId>\n                    <groupId>com.google.guava</groupId>\n                </exclusion>\n            </exclusions>\n        </dependency>\n\n        <dependency>\n            <groupId>org.projectlombok</groupId>\n            <artifactId>lombok</artifactId>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-java</artifactId>\n            <version>${flink.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-streaming-java_2.11</artifactId>\n            <version>${flink.version}</version>\n            <exclusions>\n                <exclusion>\n                    <artifactId>flink-shaded-zookeeper-3</artifactId>\n                    <groupId>org.apache.flink</groupId>\n                </exclusion>\n                <exclusion>\n                    <artifactId>flink-shaded-guava</artifactId>\n                    <groupId>org.apache.flink</groupId>\n                </exclusion>\n            </exclusions>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-clients_2.11</artifactId>\n            <version>${flink.version}</version>\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/org.mvel/mvel2 -->\n        <dependency>\n            <groupId>org.mvel</groupId>\n            <artifactId>mvel2</artifactId>\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/redis.clients/jedis -->\n        <dependency>\n            <groupId>redis.clients</groupId>\n            <artifactId>jedis</artifactId>\n        </dependency>\n\n        <!-- 对zookeeper的底层api的一些封装 -->\n        <dependency>\n            <groupId>org.apache.curator</groupId>\n            <artifactId>curator-framework</artifactId>\n        </dependency>\n        <!-- 封装了一些高级特性，如：Cache事件监听、选举、分布式锁、分布式Barrier -->\n        <dependency>\n            <groupId>org.apache.curator</groupId>\n            <artifactId>curator-recipes</artifactId>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.kafka</groupId>\n            <artifactId>kafka-clients</artifactId>\n        </dependency>\n\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy</artifactId>\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-ant</artifactId>\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-cli-commons</artifactId>\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-cli-picocli</artifactId>\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-console</artifactId>\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-datetime</artifactId>\n        </dependency>\n     
   <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-docgenerator</artifactId>\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-groovydoc</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-groovysh</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-jmx</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-json</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-jsr223</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-macro</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-nio</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-servlet</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-sql</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-swing</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-templates</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-test</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-test-junit5</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-testng</artifactId>\n\n        </dependency>\n        <dependency>\n            <groupId>org.codehaus.groovy</groupId>\n            <artifactId>groovy-xml</artifactId>\n\n        </dependency>\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-planner_2.11</artifactId>-->\n<!--            <version>1.13.5</version>-->\n<!--&lt;!&ndash;            <scope>provided</scope>&ndash;&gt;-->\n<!--        </dependency>-->\n\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-streaming-scala_2.11</artifactId>\n            <version>${flink.version}</version>\n        </dependency>\n\n        <dependency>\n            <groupId>mysql</groupId>\n            <artifactId>mysql-connector-java</artifactId>\n            <version>${mysql.version}</version>\n        </dependency>\n\n        <dependency>\n            <groupId>com.google.code.gson</groupId>\n            <artifactId>gson</artifactId>\n\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-table-common</artifactId>\n            <version>${flink.version}</version>\n            <scope>compile</scope>\n        </dependency>\n        <dependency>\n            
<groupId>org.apache.flink</groupId>\n            <artifactId>flink-table-api-java</artifactId>\n            <version>${flink.version}</version>\n            <scope>compile</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-table-api-java-bridge_2.11</artifactId>\n            <version>${flink.version}</version>\n            <scope>compile</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-table-planner-blink_2.11</artifactId>\n            <version>${flink.version}</version>\n            <scope>compile</scope>\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-jdbc -->\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-connector-jdbc_2.11</artifactId>\n            <version>${flink.version}</version>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-connector-hbase-2.2_2.11</artifactId>\n            <version>${flink.version}</version>\n            <exclusions>\n                <exclusion>\n                    <artifactId>hbase-shaded-miscellaneous</artifactId>\n                    <groupId>org.apache.hbase.thirdparty</groupId>\n                </exclusion>\n            </exclusions>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-json</artifactId>\n            <version>${flink.version}</version>\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/org.apache.bahir/flink-connector-redis -->\n        <dependency>\n            <groupId>org.apache.bahir</groupId>\n            <artifactId>flink-connector-redis_2.10</artifactId>\n            <version>1.0</version>\n        </dependency>\n\n\n        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka -->\n        <dependency>\n            <groupId>org.apache.flink</groupId>\n            <artifactId>flink-connector-kafka_2.12</artifactId>\n\n        </dependency>\n\n\n        <dependency>\n            <groupId>ch.qos.logback</groupId>\n            <artifactId>logback-classic</artifactId>\n            <scope>compile</scope>\n\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->\n        <dependency>\n            <groupId>org.slf4j</groupId>\n            <artifactId>slf4j-log4j12</artifactId>\n\n        </dependency>\n\n        <dependency>\n\n            <groupId>org.apache.flink</groupId>\n\n            <artifactId>flink-runtime-web_2.11</artifactId>\n\n            <version>${flink.version}</version>\n\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind -->\n        <dependency>\n            <groupId>com.fasterxml.jackson.core</groupId>\n            <artifactId>jackson-databind</artifactId>\n\n        </dependency>\n\n        <dependency>\n            <groupId>com.fasterxml.jackson.core</groupId>\n            <artifactId>jackson-core</artifactId>\n\n        </dependency>\n\n        <dependency>\n            <groupId>com.fasterxml.jackson.core</groupId>\n            <artifactId>jackson-annotations</artifactId>\n\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-kotlin -->\n        <dependency>\n           
 <groupId>com.fasterxml.jackson.module</groupId>\n            <artifactId>jackson-module-kotlin</artifactId>\n\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-parameter-names -->\n        <dependency>\n            <groupId>com.fasterxml.jackson.module</groupId>\n            <artifactId>jackson-module-parameter-names</artifactId>\n\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.datatype/jackson-datatype-guava -->\n        <dependency>\n            <groupId>com.fasterxml.jackson.datatype</groupId>\n            <artifactId>jackson-datatype-guava</artifactId>\n            <exclusions>\n                <exclusion>\n                    <artifactId>guava</artifactId>\n                    <groupId>com.google.guava</groupId>\n                </exclusion>\n            </exclusions>\n\n        </dependency>\n\n\n        <!-- https://mvnrepository.com/artifact/com.hubspot.jackson/jackson-datatype-protobuf -->\n        <dependency>\n            <groupId>com.hubspot.jackson</groupId>\n            <artifactId>jackson-datatype-protobuf</artifactId>\n            <exclusions>\n                <exclusion>\n                    <artifactId>guava</artifactId>\n                    <groupId>com.google.guava</groupId>\n                </exclusion>\n            </exclusions>\n\n        </dependency>\n\n        <!-- https://mvnrepository.com/artifact/org.apache.calcite/calcite-core -->\n        <dependency>\n            <groupId>org.apache.calcite</groupId>\n            <artifactId>calcite-core</artifactId>\n            <exclusions>\n                <exclusion>\n                    <artifactId>guava</artifactId>\n                    <groupId>com.google.guava</groupId>\n                </exclusion>\n            </exclusions>\n\n        </dependency>\n\n        <dependency>\n            <groupId>com.google.guava</groupId>\n            <artifactId>guava</artifactId>\n        </dependency>\n\n\n    </dependencies>\n\n\n</project>"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/core/source/JaninoUtils.java",
    "content": "package flink.core.source;\n\nimport org.codehaus.janino.SimpleCompiler;\n\nimport lombok.extern.slf4j.Slf4j;\n\n\n@Slf4j\npublic class JaninoUtils {\n\n    private static final SimpleCompiler COMPILER = new SimpleCompiler();\n\n    static {\n        COMPILER.setParentClassLoader(JaninoUtils.class.getClassLoader());\n    }\n\n    public static <T> Class<T> genClass(String className, String code, Class<T> clazz) throws Exception {\n\n        COMPILER.cook(code);\n\n        System.out.println(\"生成的代码：\\n\" + code);\n\n        return (Class<T>) COMPILER.getClassLoader().loadClass(className);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/core/source/SourceFactory.java",
    "content": "package flink.core.source;\n\nimport java.io.IOException;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.common.serialization.SerializationSchema;\n\nimport com.google.protobuf.GeneratedMessageV3;\n\nimport flink.examples.datastream._04.keyed_co_process.protobuf.Source;\nimport lombok.SneakyThrows;\n\npublic class SourceFactory {\n\n    public static <Message extends GeneratedMessageV3> SerializationSchema<Message> getProtobufSer(Class<Message> clazz) {\n        return new SerializationSchema<Message>() {\n            @Override\n            public byte[] serialize(Message element) {\n                return element.toByteArray();\n            }\n        };\n    }\n\n    @SneakyThrows\n    public static <Message extends GeneratedMessageV3> DeserializationSchema<Message> getProtobufDerse(Class<Message> clazz) {\n\n        String code = TEMPLATE.replaceAll(\"\\\\$\\\\{ProtobufClassName}\", clazz.getName())\n                .replaceAll(\"\\\\$\\\\{SimpleProtobufName}\", clazz.getSimpleName());\n\n        String className = clazz.getSimpleName() + \"_DeserializationSchema\";\n\n        Class<DeserializationSchema> deClass = JaninoUtils.genClass(className, code, DeserializationSchema.class);\n\n        return deClass.newInstance();\n    }\n\n    private static final String TEMPLATE =\n                        \"public class ${SimpleProtobufName}_DeserializationSchema extends org.apache.flink.api.common\"\n                      + \".serialization.AbstractDeserializationSchema<${ProtobufClassName}> {\\n\"\n                      + \"\\n\"\n                      + \"    public ${SimpleProtobufName}_DeserializationSchema() {\\n\"\n                      + \"        super(${ProtobufClassName}.class);\\n\"\n                      + \"    }\\n\"\n                      + \"\\n\"\n                      + \"    @Override\\n\"\n                      + \"    public ${ProtobufClassName} deserialize(byte[] message) throws java.io.IOException {\\n\"\n                      + \"        return ${ProtobufClassName}.parseFrom(message);\\n\"\n                      + \"    }\\n\"\n                      + \"}\";\n\n    public static void main(String[] args) throws IOException {\n        System.out.println(SourceFactory.class.getName());\n        System.out.println(SourceFactory.class.getCanonicalName());\n        System.out.println(SourceFactory.class.getSimpleName());\n        System.out.println(SourceFactory.class.getTypeName());\n\n        DeserializationSchema<Source> ds = getProtobufDerse(Source.class);\n\n        Source s = Source.newBuilder()\n                .addNames(\"antigeneral\")\n                .build();\n\n        Source s1 = ds.deserialize(s.toByteArray());\n\n        System.out.println();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/FlinkEnvUtils.java",
    "content": "package flink.examples;\n\nimport java.io.IOException;\nimport java.util.Optional;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.contrib.streaming.state.PredefinedOptions;\nimport org.apache.flink.contrib.streaming.state.RocksDBStateBackend;\nimport org.apache.flink.runtime.state.StateBackend;\nimport org.apache.flink.runtime.state.filesystem.FsStateBackend;\nimport org.apache.flink.runtime.state.memory.MemoryStateBackend;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\nimport lombok.Builder;\nimport lombok.Data;\n\npublic class FlinkEnvUtils {\n\n    private static final boolean ENABLE_INCREMENTAL_CHECKPOINT = true;\n    private static final int NUMBER_OF_TRANSFER_THREADS = 3;\n\n    /**\n     * 设置状态后端为 RocksDBStateBackend\n     *\n     * @param env env\n     */\n    public static void setRocksDBStateBackend(StreamExecutionEnvironment env) throws IOException {\n        setCheckpointConfig(env);\n\n        RocksDBStateBackend rocksDBStateBackend = new RocksDBStateBackend(\n                \"file:///Users/flink/checkpoints\", ENABLE_INCREMENTAL_CHECKPOINT);\n        rocksDBStateBackend.setNumberOfTransferThreads(NUMBER_OF_TRANSFER_THREADS);\n        rocksDBStateBackend.setPredefinedOptions(PredefinedOptions.SPINNING_DISK_OPTIMIZED_HIGH_MEM);\n        env.setStateBackend((StateBackend) rocksDBStateBackend);\n    }\n\n\n    /**\n     * 设置状态后端为 FsStateBackend\n     *\n     * @param env env\n     */\n    public static void setFsStateBackend(StreamExecutionEnvironment env) throws IOException {\n        setCheckpointConfig(env);\n        FsStateBackend fsStateBackend = new FsStateBackend(\"file:///Users/flink/checkpoints\");\n        env.setStateBackend((StateBackend) fsStateBackend);\n    }\n\n\n    /**\n     * 设置状态后端为 MemoryStateBackend\n     *\n     * @param env env\n     */\n    public static void setMemoryStateBackend(StreamExecutionEnvironment env) throws IOException {\n        setCheckpointConfig(env);\n        env.setStateBackend((StateBackend) new MemoryStateBackend());\n    }\n\n    /**\n     * Checkpoint 参数相关配置，but 不设置 StateBackend，即：读取 flink-conf.yaml 文件的配置\n     *\n     * @param env env\n     */\n    public static void setCheckpointConfig(StreamExecutionEnvironment env) throws IOException {\n        env.getCheckpointConfig().setCheckpointTimeout(TimeUnit.MINUTES.toMillis(3));\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(180 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n\n        Configuration configuration = new Configuration();\n        configuration.setString(\"state.checkpoints.num-retained\", \"3\");\n\n        env.configure(configuration, 
Thread.currentThread().getContextClassLoader());\n\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n    }\n\n    public static FlinkEnv getStreamTableEnv(String[] args) throws IOException {\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        Configuration configuration = Configuration.fromMap(parameterTool.toMap());\n\n        configuration.setString(\"rest.flamegraph.enabled\", \"true\");\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(configuration);\n\n        String stateBackend = parameterTool.get(\"state.backend\", \"rocksdb\");\n\n        env.setParallelism(1);\n\n        if (\"rocksdb\".equals(stateBackend)) {\n            setRocksDBStateBackend(env);\n        } else if (\"filesystem\".equals(stateBackend)) {\n            setFsStateBackend(env);\n        } else if (\"jobmanager\".equals(stateBackend)) {\n            setMemoryStateBackend(env);\n        }\n\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode()\n                .build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().addConfiguration(configuration);\n\n        FlinkEnv flinkEnv = FlinkEnv\n                .builder()\n                .streamExecutionEnvironment(env)\n                .streamTableEnvironment(tEnv)\n                .build();\n\n        initHiveEnv(flinkEnv, parameterTool);\n\n        return flinkEnv;\n    }\n\n    /**\n     * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n     * http://localhost:9870/\n     * http://localhost:8088/cluster\n     *\n     * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n     * hive cli：$HIVE_HOME/bin/hive\n     */\n    private static void initHiveEnv(FlinkEnv flinkEnv, ParameterTool parameterTool) {\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        boolean enableHiveCatalog = parameterTool.getBoolean(\"enable.hive.catalog\", false);\n\n        if (enableHiveCatalog) {\n            HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n\n            Optional.ofNullable(flinkEnv.streamTEnv())\n                    .ifPresent(s -> s.registerCatalog(\"default\", hive));\n\n            Optional.ofNullable(flinkEnv.batchTEnv())\n                    .ifPresent(s -> s.registerCatalog(\"default\", hive));\n\n            // set the HiveCatalog as the current catalog of the session\n\n            Optional.ofNullable(flinkEnv.streamTEnv())\n                    .ifPresent(s -> s.useCatalog(\"default\"));\n\n            Optional.ofNullable(flinkEnv.batchTEnv())\n                    .ifPresent(s -> s.useCatalog(\"default\"));\n        }\n\n        boolean enableHiveDialect = parameterTool.getBoolean(\"enable.hive.dialect\", false);\n\n        if (enableHiveDialect) {\n\n            Optional.ofNullable(flinkEnv.streamTEnv())\n                    .ifPresent(s -> s.getConfig().setSqlDialect(SqlDialect.HIVE));\n\n           
 Optional.ofNullable(flinkEnv.batchTEnv())\n                    .ifPresent(s -> s.getConfig().setSqlDialect(SqlDialect.HIVE));\n        }\n\n        boolean enableHiveModuleV2 = parameterTool.getBoolean(\"enable.hive.module.v2\", true);\n\n        if (enableHiveModuleV2) {\n            String version = \"3.1.2\";\n\n            HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n            final boolean enableHiveModuleLoadFirst = parameterTool.getBoolean(\"enable.hive.module.load-first\", true);\n\n            Optional.ofNullable(flinkEnv.streamTEnv())\n                    .ifPresent(s -> {\n                        if (enableHiveModuleLoadFirst) {\n                            s.unloadModule(\"core\");\n                            s.loadModule(\"default\", hiveModuleV2);\n                            s.loadModule(\"core\", CoreModule.INSTANCE);\n                        } else {\n                            s.loadModule(\"default\", hiveModuleV2);\n                        }\n                    });\n\n            Optional.ofNullable(flinkEnv.batchTEnv())\n                    .ifPresent(s -> {\n                        if (enableHiveModuleLoadFirst) {\n                            s.unloadModule(\"core\");\n                            s.loadModule(\"default\", hiveModuleV2);\n                            s.loadModule(\"core\", CoreModule.INSTANCE);\n                        } else {\n                            s.loadModule(\"default\", hiveModuleV2);\n                        }\n                    });\n\n            flinkEnv.setHiveModuleV2(hiveModuleV2);\n        }\n    }\n\n\n    public static FlinkEnv getBatchTableEnv(String[] args) throws IOException {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        FlinkEnv flinkEnv = FlinkEnv\n                .builder()\n                .streamExecutionEnvironment(env)\n                .tableEnvironment(tEnv)\n                .build();\n\n\n        initHiveEnv(flinkEnv, parameterTool);\n\n        return flinkEnv;\n    }\n\n    @Builder\n    @Data\n    public static class FlinkEnv {\n        private StreamExecutionEnvironment streamExecutionEnvironment;\n        private StreamTableEnvironment streamTableEnvironment;\n        private TableEnvironment tableEnvironment;\n        private HiveModuleV2 hiveModuleV2;\n\n        public StreamTableEnvironment streamTEnv() {\n            return this.streamTableEnvironment;\n        
}\n\n        public TableEnvironment batchTEnv() {\n            return this.tableEnvironment;\n        }\n\n        public StreamExecutionEnvironment env() {\n            return this.streamExecutionEnvironment;\n        }\n\n        public HiveModuleV2 hiveModuleV2() {\n            return this.hiveModuleV2;\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/JacksonUtils.java",
    "content": "package flink.examples;\n\nimport static com.fasterxml.jackson.core.JsonParser.Feature.ALLOW_COMMENTS;\nimport static com.fasterxml.jackson.core.JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS;\nimport static com.fasterxml.jackson.databind.DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES;\n\nimport java.util.List;\nimport java.util.Map;\n\nimport com.fasterxml.jackson.core.JsonProcessingException;\nimport com.fasterxml.jackson.databind.JavaType;\nimport com.fasterxml.jackson.databind.ObjectMapper;\nimport com.hubspot.jackson.datatype.protobuf.ProtobufModule;\n\n\npublic class JacksonUtils {\n\n    private static ObjectMapper mapper = new ObjectMapper();\n\n    static {\n        mapper.registerModule(new ProtobufModule());\n        mapper.disable(FAIL_ON_UNKNOWN_PROPERTIES);\n        mapper.enable(ALLOW_UNQUOTED_CONTROL_CHARS);\n        mapper.enable(ALLOW_COMMENTS);\n    }\n\n    public static String bean2Json(Object data) {\n        try {\n            String result = mapper.writeValueAsString(data);\n            return result;\n        } catch (JsonProcessingException e) {\n            e.printStackTrace();\n        }\n        return null;\n    }\n\n    public static <T> T json2Bean(String jsonData, Class<T> beanType) {\n        try {\n            T result = mapper.readValue(jsonData, beanType);\n            return result;\n        } catch (Exception e) {\n            e.printStackTrace();\n        }\n\n        return null;\n    }\n\n    public static <T> List<T> json2List(String jsonData, Class<T> beanType) {\n        JavaType javaType = mapper.getTypeFactory().constructParametricType(List.class, beanType);\n\n        try {\n            List<T> resultList = mapper.readValue(jsonData, javaType);\n            return resultList;\n        } catch (Exception e) {\n            e.printStackTrace();\n        }\n\n        return null;\n    }\n\n    public static <K, V> Map<K, V> json2Map(String jsonData, Class<K> keyType, Class<V> valueType) {\n        JavaType javaType = mapper.getTypeFactory().constructMapType(Map.class, keyType, valueType);\n\n        try {\n            Map<K, V> resultMap = mapper.readValue(jsonData, javaType);\n            return resultMap;\n        } catch (Exception e) {\n            e.printStackTrace();\n        }\n\n        return null;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/codegen/JaninoUtils.java",
    "content": "package flink.examples.datastream._01.bytedance.split.codegen;\n\nimport org.codehaus.janino.SimpleCompiler;\n\nimport flink.examples.datastream._01.bytedance.split.model.Evaluable;\nimport lombok.extern.slf4j.Slf4j;\n\n\n@Slf4j\npublic class JaninoUtils {\n\n    private static final SimpleCompiler COMPILER = new SimpleCompiler();\n\n    static {\n        COMPILER.setParentClassLoader(JaninoUtils.class.getClassLoader());\n    }\n\n    public static Class<Evaluable> genCodeAndGetClazz(Long id, String topic, String condition) throws Exception {\n\n        String className = \"CodeGen_\" + topic + \"_\" + id;\n\n        String code = \"import org.apache.commons.lang3.ArrayUtils;\\n\"\n                + \"\\n\"\n                + \"public class \" + className + \" implements flink.examples.datastream._01.bytedance.split.model.Evaluable {\\n\"\n                + \"    \\n\"\n                + \"    @Override\\n\"\n                + \"    public boolean eval(flink.examples.datastream._01.bytedance.split.model.ClientLogSource clientLogSource) {\\n\"\n                + \"        \\n\"\n                + \"        return \" + condition + \";\\n\"\n                + \"    }\\n\"\n                + \"}\\n\";\n\n        COMPILER.cook(code);\n\n        System.out.println(\"生成的代码：\\n\" + code);\n\n        return (Class<Evaluable>) COMPILER.getClassLoader().loadClass(className);\n    }\n\n    public static void main(String[] args) throws Exception {\n        Class<Evaluable> c = genCodeAndGetClazz(1L, \"topic\", \"1==1\");\n\n        System.out.println(1);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/codegen/benchmark/Benchmark.java",
    "content": "package flink.examples.datastream._01.bytedance.split.codegen.benchmark;\n\nimport org.codehaus.groovy.control.CompilerConfiguration;\n\nimport flink.examples.datastream._01.bytedance.split.model.ClientLogSource;\nimport flink.examples.datastream._01.bytedance.split.model.DynamicProducerRule;\nimport groovy.lang.GroovyClassLoader;\nimport groovy.lang.GroovyObject;\nimport lombok.extern.slf4j.Slf4j;\n\n\n@Slf4j\npublic class Benchmark {\n\n    private static void benchmarkForJava() {\n        ClientLogSource s = ClientLogSource.builder().id(1).build();\n\n        long start2 = System.currentTimeMillis();\n\n        for (int i = 0; i < 50000000; i++) {\n            boolean b = String.valueOf(s.getId()).equals(\"1\");\n        }\n\n        long end2 = System.currentTimeMillis();\n\n        System.out.println(\"java:\" + (end2 - start2) + \" ms\");\n    }\n\n    public static void benchmarkForGroovyClassLoader() {\n\n        CompilerConfiguration config = new CompilerConfiguration();\n        config.setSourceEncoding(\"UTF-8\");\n        // 设置该GroovyClassLoader的父ClassLoader为当前线程的加载器(默认)\n        GroovyClassLoader groovyClassLoader =\n                new GroovyClassLoader(Thread.currentThread().getContextClassLoader(), config);\n\n        String groovyCode = \"class demo_002 {\\n\"\n                + \"    boolean eval(flink.examples.datastream._01.bytedance.split.model.SourceModel sourceModel) {\\n\"\n                + \"        return String.valueOf(sourceModel.getId()).equals(\\\"1\\\");\\n\"\n                + \"    }\\n\"\n                + \"}\";\n        try {\n            // 获得GroovyShell_2加载后的class\n            Class<?> groovyClass = groovyClassLoader.parseClass(groovyCode);\n            // 获得GroovyShell_2的实例\n            GroovyObject groovyObject = (GroovyObject) groovyClass.newInstance();\n\n            ClientLogSource s = ClientLogSource.builder().id(1).build();\n\n            long start1 = System.currentTimeMillis();\n\n            for (int i = 0; i < 50000000; i++) {\n                Object methodResult = groovyObject.invokeMethod(\"eval\", s);\n            }\n\n            long end1 = System.currentTimeMillis();\n\n            System.out.println(\"groovy:\" + (end1 - start1) + \" ms\");\n        } catch (Exception e) {\n            e.getStackTrace();\n        }\n    }\n\n    public static void benchmarkForJanino() {\n\n        String condition = \"String.valueOf(sourceModel.getId()).equals(\\\"1\\\")\";\n\n        DynamicProducerRule dynamicProducerRule = DynamicProducerRule\n                .builder()\n                .condition(condition)\n                .targetTopic(\"t\")\n                .build();\n\n        dynamicProducerRule.init(1L);\n\n        ClientLogSource s = ClientLogSource.builder().id(1).build();\n\n        long start2 = System.currentTimeMillis();\n\n        for (int i = 0; i < 50000000; i++) {\n            boolean b = dynamicProducerRule.eval(s);\n        }\n\n        long end2 = System.currentTimeMillis();\n\n        System.out.println(\"janino:\" + (end2 - start2) + \" ms\");\n    }\n\n    public static void main(String[] args) throws Exception {\n\n        for (int i = 0; i < 10; i++) {\n            benchmarkForJava();\n\n            // janino\n            benchmarkForJanino();\n\n            // groovy classloader\n            benchmarkForGroovyClassLoader();\n\n            System.out.println();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/job/SplitExampleJob.java",
    "content": "package flink.examples.datastream._01.bytedance.split.job;\n\nimport java.util.Date;\nimport java.util.concurrent.TimeUnit;\nimport java.util.function.BiConsumer;\n\nimport org.apache.commons.lang3.RandomUtils;\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.ProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.datastream._01.bytedance.split.kafka.KafkaProducerCenter;\nimport flink.examples.datastream._01.bytedance.split.model.ClientLogSink;\nimport flink.examples.datastream._01.bytedance.split.model.ClientLogSource;\nimport flink.examples.datastream._01.bytedance.split.model.DynamicProducerRule;\nimport flink.examples.datastream._01.bytedance.split.zkconfigcenter.ZkBasedConfigCenter;\n\n/**\n * zk：https://www.jianshu.com/p/5491d16e6abd\n * kafka：https://www.jianshu.com/p/dd2578d47ff6\n */\npublic class SplitExampleJob {\n\n    public static void main(String[] args) throws Exception {\n\n        ParameterTool parameters = ParameterTool.fromArgs(args);\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        // 其他参数设置\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameters);\n        env.setMaxParallelism(2);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        env.setParallelism(1);\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);\n\n        env.addSource(new UserDefinedSource())\n                .process(new ProcessFunction<ClientLogSource, ClientLogSink>() {\n\n                    private ZkBasedConfigCenter zkBasedConfigCenter;\n\n                    private KafkaProducerCenter kafkaProducerCenter;\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n                        this.zkBasedConfigCenter = ZkBasedConfigCenter.getInstance();\n                        this.kafkaProducerCenter = KafkaProducerCenter.getInstance();\n\n                    }\n\n                    @Override\n                    public void processElement(ClientLogSource clientLogSource, Context context, Collector<ClientLogSink> collector)\n                            throws Exception {\n\n                        this.zkBasedConfigCenter.getMap().forEach(new BiConsumer<Long, DynamicProducerRule>() {\n                            @Override\n                            public void accept(Long id, DynamicProducerRule 
dynamicProducerRule) {\n\n                                if (dynamicProducerRule.eval(clientLogSource)) {\n                                    kafkaProducerCenter.send(dynamicProducerRule.getTargetTopic(), clientLogSource.toString());\n                                }\n\n                            }\n                        });\n                    }\n\n                    @Override\n                    public void close() throws Exception {\n                        super.close();\n                        this.zkBasedConfigCenter.close();\n                        this.kafkaProducerCenter.close();\n                    }\n                });\n\n        env.execute();\n    }\n\n    private static class UserDefinedSource implements SourceFunction<ClientLogSource> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<ClientLogSource> sourceContext) throws Exception {\n\n            while (!this.isCancel) {\n                sourceContext.collect(\n                        ClientLogSource\n                                .builder()\n                                .id(RandomUtils.nextInt(0, 10))\n                                .price(RandomUtils.nextInt(0, 100))\n                                .timestamp(System.currentTimeMillis())\n                                .date(new Date().toString())\n                                .build()\n                );\n\n                Thread.sleep(1000L);\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/job/start.sh",
    "content": "# 1.kafka 初始化\n\ncd /kafka-bin-目录\n\n# 启动 kafka server\n./kafka-server-start /usr/local/etc/kafka/server.properties &\n\n# 创建 3 个 topic\nkafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic tuzisir\n\nkafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic tuzisir1\n\nkafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic tuzisir2\n\n# 启动一个 console consumer\n\nkafka-console-consumer --bootstrap-server localhost:9092 --topic tuzisir --from-beginning\n\n# 2.zk 初始化\n\ncd /zk-bin-目录\n\nzkServer start\n\nzkCli -server 127.0.0.1:2181\n\n# zkCli 中需要执行的命令\ncreate /kafka-config {\"1\":{\"condition\":\"1==1\",\"targetTopic\":\"tuzisir1\"},\"2\":{\"condition\":\"1!=1\",\"targetTopic\":\"tuzisir2\"}}\n\nget /kafka-config\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/kafka/KafkaProducerCenter.java",
    "content": "package flink.examples.datastream._01.bytedance.split.kafka;\n\nimport java.util.Properties;\nimport java.util.concurrent.ConcurrentHashMap;\nimport java.util.concurrent.ConcurrentMap;\nimport java.util.function.BiConsumer;\nimport java.util.function.Function;\n\nimport org.apache.kafka.clients.producer.KafkaProducer;\nimport org.apache.kafka.clients.producer.Producer;\nimport org.apache.kafka.clients.producer.ProducerRecord;\nimport org.apache.kafka.clients.producer.RecordMetadata;\n\nimport flink.examples.datastream._01.bytedance.split.zkconfigcenter.ZkBasedConfigCenter;\n\n\npublic class KafkaProducerCenter {\n\n    private final ConcurrentMap<String, Producer<String, String>> producerConcurrentMap\n            = new ConcurrentHashMap<>();\n\n    private KafkaProducerCenter() {\n        ZkBasedConfigCenter.getInstance()\n                .getMap()\n                .values()\n                .forEach(d -> getProducer(d.getTargetTopic()));\n    }\n\n    private static class Factory {\n        private static final KafkaProducerCenter INSTANCE = new KafkaProducerCenter();\n    }\n\n    public static KafkaProducerCenter getInstance() {\n        return Factory.INSTANCE;\n    }\n\n    private Producer<String, String> getProducer(String topicName) {\n\n        Producer<String, String> producer = producerConcurrentMap.get(topicName);\n\n        if (null != producer) {\n            return producer;\n        }\n\n        return producerConcurrentMap.computeIfAbsent(topicName, new Function<String, Producer<String, String>>() {\n            @Override\n            public Producer<String, String> apply(String topicName) {\n                Properties props = new Properties();\n                props.put(\"bootstrap.servers\", \"localhost:9092\");\n                props.put(\"acks\", \"all\");\n                props.put(\"key.serializer\", \"org.apache.kafka.common.serialization.StringSerializer\");\n                props.put(\"value.serializer\", \"org.apache.kafka.common.serialization.StringSerializer\");\n                return new KafkaProducer<>(props);\n            }\n        });\n\n    }\n\n    public void send(String topicName, String message) {\n\n        final ProducerRecord<String, String> record = new ProducerRecord<>(topicName,\n                \"\", message);\n        try {\n            RecordMetadata metadata = getProducer(topicName).send(record).get();\n        } catch (Exception e) {\n            throw new RuntimeException(e);\n        }\n    }\n\n    public void close() {\n        this.producerConcurrentMap.forEach(new BiConsumer<String, Producer<String, String>>() {\n            @Override\n            public void accept(String s, Producer<String, String> stringStringProducer) {\n                stringStringProducer.flush();\n                stringStringProducer.close();\n            }\n        });\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/kafka/demo/Application.java",
    "content": "package flink.examples.datastream._01.bytedance.split.kafka.demo;\n\n\npublic class Application {\n\n    private String topicName = \"tuzisir\";\n    private String consumerGrp = \"consumerGrp\";\n    private String brokerUrl = \"localhost:9092\";\n\n    public static void main(String[] args) throws InterruptedException {\n\n\n        System.out.println(1);\n\n        Application application = new Application();\n        new Thread(new ProducerThread(application), \"Producer : \").start();\n        new Thread(new ConsumerThread(application), \"Consumer1 : \").start();\n\n        //for multiple consumers in same group, start new consumer threads\n        //new Thread(new ConsumerThread(application), \"Consumer2 : \").start();\n    }\n\n    public String getTopicName() {\n        return topicName;\n    }\n\n    public String getConsumerGrp() {\n        return consumerGrp;\n    }\n\n    public String getBrokerUrl() {\n        return brokerUrl;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/kafka/demo/ConsumerThread.java",
    "content": "package flink.examples.datastream._01.bytedance.split.kafka.demo;\n\nimport java.time.Duration;\nimport java.util.Collections;\nimport java.util.Properties;\n\nimport org.apache.kafka.clients.consumer.Consumer;\nimport org.apache.kafka.clients.consumer.ConsumerRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecords;\nimport org.apache.kafka.clients.consumer.KafkaConsumer;\n\n\npublic class ConsumerThread implements Runnable {\n\n    private Consumer<String, String> consumer;\n\n    public ConsumerThread(Application application) {\n        Properties props = new Properties();\n        props.put(\"bootstrap.servers\", application.getBrokerUrl());\n        props.put(\"group.id\", application.getConsumerGrp());\n        props.put(\"enable.auto.commit\", \"true\");\n        props.put(\"auto.commit.interval.ms\", \"1000\");\n        props.put(\"key.deserializer\", \"org.apache.kafka.common.serialization.StringDeserializer\");\n        props.put(\"value.deserializer\", \"org.apache.kafka.common.serialization.StringDeserializer\");\n        //props.put(\"auto.offset.reset\", \"earliest\");\n        consumer = new KafkaConsumer<>(props);\n        consumer.subscribe(Collections.singletonList(application.getTopicName()));\n    }\n\n    @Override\n    public void run() {\n        String threadName = Thread.currentThread().getName();\n        int noMessageToFetch = 1;\n        while (noMessageToFetch < 3) {\n            System.out.println(threadName + \"poll start..\");\n            final ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofSeconds(1));\n            System.out.println(threadName + \"records polled : \" + consumerRecords.count());\n            if (consumerRecords.count() == 0) {\n                noMessageToFetch++;\n                continue;\n            }\n            for (ConsumerRecord<String, String> record : consumerRecords) {\n                System.out.printf(threadName + \"offset = %d, key = %s, value = %s, partition =%d%n\",\n                        record.offset(), record.key(), record.value(), record.partition());\n            }\n            consumer.commitAsync();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/kafka/demo/ProducerThread.java",
    "content": "package flink.examples.datastream._01.bytedance.split.kafka.demo;\n\nimport java.util.Properties;\nimport java.util.concurrent.ExecutionException;\n\nimport org.apache.kafka.clients.producer.KafkaProducer;\nimport org.apache.kafka.clients.producer.Producer;\nimport org.apache.kafka.clients.producer.ProducerRecord;\nimport org.apache.kafka.clients.producer.RecordMetadata;\n\n\npublic class ProducerThread implements Runnable {\n\n    private Producer<String, String> producer;\n    private String topicName;\n\n    public ProducerThread(Application application) {\n        this.topicName = application.getTopicName();\n        Properties props = new Properties();\n        props.put(\"bootstrap.servers\", application.getBrokerUrl());\n        props.put(\"acks\", \"all\");\n        props.put(\"key.serializer\", \"org.apache.kafka.common.serialization.StringSerializer\");\n        props.put(\"value.serializer\", \"org.apache.kafka.common.serialization.StringSerializer\");\n        producer = new KafkaProducer<>(props);\n    }\n\n    @Override\n    public void run() {\n        String threadName = Thread.currentThread().getName();\n        for (int index = 1; index < 100; index++) {\n            final ProducerRecord<String, String> record = new ProducerRecord<>(topicName,\n                    Integer.toString(index), Integer.toString(index));\n            try {\n                RecordMetadata metadata = producer.send(record).get();\n                System.out\n                        .println(threadName + \"Record sent with key \" + index + \" to partition \" + metadata.partition()\n                                + \" with offset \" + metadata.offset());\n            } catch (ExecutionException e) {\n                System.out.println(threadName + \"Error in sending record :\" + e);\n                throw new RuntimeException(e);\n            } catch (InterruptedException e) {\n                System.out.println(threadName + \"Error in sending record : \" + e);\n                throw new RuntimeException(e);\n            } catch (Exception e) {\n                System.out.println(threadName + \"Error in sending record : \" + e);\n                throw new RuntimeException(e);\n            }\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/model/ClientLogSink.java",
    "content": "package flink.examples.datastream._01.bytedance.split.model;\n\nimport lombok.Builder;\nimport lombok.Data;\n\n\n@Data\n@Builder\npublic class ClientLogSink {\n    private int id;\n    private int price;\n    private long timestamp;\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/model/ClientLogSource.java",
    "content": "package flink.examples.datastream._01.bytedance.split.model;\n\nimport lombok.Builder;\nimport lombok.Data;\n\n\n@Data\n@Builder\npublic class ClientLogSource {\n\n    private int id;\n    private int price;\n    private long timestamp;\n    private String date;\n    private String page;\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/model/DynamicProducerRule.java",
    "content": "package flink.examples.datastream._01.bytedance.split.model;\n\n\nimport flink.examples.datastream._01.bytedance.split.codegen.JaninoUtils;\nimport lombok.Builder;\nimport lombok.Data;\n\n\n@Data\n@Builder\npublic class DynamicProducerRule implements Evaluable {\n\n    private String condition;\n\n    private String targetTopic;\n\n    private Evaluable evaluable;\n\n    public void init(Long id) {\n        try {\n            Class<Evaluable> clazz = JaninoUtils.genCodeAndGetClazz(id, targetTopic, condition);\n            this.evaluable = clazz.newInstance();\n        } catch (Exception e) {\n            throw new RuntimeException(e);\n        }\n    }\n\n    @Override\n    public boolean eval(ClientLogSource clientLogSource) {\n        return this.evaluable.eval(clientLogSource);\n    }\n\n    public static void main(String[] args) throws Exception {\n        String condition = \"String.valueOf(sourceModel.getId())==\\\"1\\\"\";\n\n        DynamicProducerRule dynamicProducerRule = DynamicProducerRule\n                .builder()\n                .condition(condition)\n                .targetTopic(\"t\")\n                .build();\n\n        dynamicProducerRule.init(1L);\n\n        boolean b = dynamicProducerRule.eval(ClientLogSource.builder().id(1).build());\n\n        System.out.println();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/model/Evaluable.java",
    "content": "package flink.examples.datastream._01.bytedance.split.model;\n\n\npublic interface Evaluable {\n\n    boolean eval(ClientLogSource clientLogSource);\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/zkconfigcenter/ZkBasedConfigCenter.java",
    "content": "package flink.examples.datastream._01.bytedance.split.zkconfigcenter;\n\nimport java.lang.reflect.Type;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Optional;\nimport java.util.Set;\nimport java.util.concurrent.ConcurrentHashMap;\nimport java.util.concurrent.ConcurrentMap;\nimport java.util.function.BiConsumer;\nimport java.util.function.Consumer;\n\nimport org.apache.curator.framework.CuratorFramework;\nimport org.apache.curator.framework.CuratorFrameworkFactory;\nimport org.apache.curator.framework.recipes.cache.TreeCache;\nimport org.apache.curator.framework.recipes.cache.TreeCacheEvent;\nimport org.apache.curator.framework.recipes.cache.TreeCacheListener;\nimport org.apache.curator.retry.RetryOneTime;\n\nimport com.google.common.collect.Sets;\nimport com.google.gson.Gson;\nimport com.google.gson.reflect.TypeToken;\n\nimport flink.examples.datastream._01.bytedance.split.model.DynamicProducerRule;\n\n\npublic class ZkBasedConfigCenter {\n\n    private TreeCache treeCache;\n\n    private CuratorFramework zkClient;\n\n    private static class Factory {\n        private static final ZkBasedConfigCenter INSTANCE = new ZkBasedConfigCenter();\n    }\n\n    public static ZkBasedConfigCenter getInstance() {\n        return Factory.INSTANCE;\n    }\n\n    private ZkBasedConfigCenter() {\n        try {\n            open();\n        } catch (Exception e) {\n            e.printStackTrace();\n            throw new RuntimeException(e);\n        }\n    }\n\n    private ConcurrentMap<Long, DynamicProducerRule> map = new ConcurrentHashMap<>();\n\n    public ConcurrentMap<Long, DynamicProducerRule> getMap() {\n        return map;\n    }\n\n\n    private void setData() throws Exception {\n        String path = \"/kafka-config\";\n        zkClient = CuratorFrameworkFactory.newClient(\"127.0.0.1:2181\", new RetryOneTime(1000));\n        zkClient.start();\n\n        zkClient.setData().forPath(path, (\"{\\n\"\n                + \"  1: {\\n\"\n                + \"    \\\"condition\\\": \\\"1==1\\\",\\n\"\n                + \"    \\\"targetTopic\\\": \\\"tuzisir1\\\"\\n\"\n                + \"  },\\n\"\n                + \"  2: {\\n\"\n                + \"    \\\"condition\\\": \\\"1!=1\\\",\\n\"\n                + \"    \\\"targetTopic\\\": \\\"tuzisir2\\\"\\n\"\n                + \"  }\\n\"\n                + \"}\").getBytes());\n    }\n\n    private void open() throws Exception {\n\n        String path = \"/kafka-config\";\n\n        zkClient = CuratorFrameworkFactory.newClient(\"127.0.0.1:2181\", new RetryOneTime(1000));\n        zkClient.start();\n        // 启动时读取远程配置中心的配置信息\n\n        String json = new String(zkClient.getData().forPath(path));\n\n        this.update(json);\n\n        treeCache = new TreeCache(zkClient, path);\n        treeCache.start();\n        treeCache.getListenable().addListener(new TreeCacheListener() {\n            @Override\n            public void childEvent(CuratorFramework curatorFramework, TreeCacheEvent treeCacheEvent) throws Exception {\n                switch (treeCacheEvent.getType()) {\n                    case NODE_UPDATED:\n                        // 通知的内容：包含路径和值\n                        byte[] data = treeCacheEvent.getData().getData();\n\n                        String json = new String(data);\n\n                        System.out.println(\"配置变化为了：\" + json);\n\n                        // 更新数据\n                        update(json);\n                        break;\n                    default:\n\n                }\n\n            }\n  
      });\n\n    }\n\n    public void close() {\n        this.treeCache.close();\n        this.zkClient.close();\n    }\n\n    private void update(String json) {\n\n        Map<Long, DynamicProducerRule>\n                result = getNewMap(json);\n\n        Set<Long> needAddId = Sets.difference(result.keySet(), map.keySet()).immutableCopy();\n\n        Set<Long> needDeleteId = Sets.difference(map.keySet(), result.keySet()).immutableCopy();\n\n        needAddId.forEach(new Consumer<Long>() {\n            @Override\n            public void accept(Long id) {\n                DynamicProducerRule dynamicProducerRule = result.get(id);\n                dynamicProducerRule.init(id);\n                map.put(id, dynamicProducerRule);\n            }\n        });\n\n        needDeleteId.forEach(new Consumer<Long>() {\n            @Override\n            public void accept(Long id) {\n                map.remove(id);\n            }\n        });\n    }\n\n    private Map<Long, DynamicProducerRule> getNewMap(String json) {\n\n        Gson gson = new Gson();\n\n        Map<String, DynamicProducerRule> newMap = null;\n\n        Type type = new TypeToken<Map<String, DynamicProducerRule>>() {\n        }.getType();\n\n        newMap = gson.fromJson(json, type);\n\n        Map<Long, DynamicProducerRule> result = new HashMap<>();\n\n        Optional.ofNullable(newMap)\n                .ifPresent(new Consumer<Map<String, DynamicProducerRule>>() {\n                    @Override\n                    public void accept(Map<String, DynamicProducerRule> stringDynamicProducerRuleMap) {\n                        stringDynamicProducerRuleMap.forEach(new BiConsumer<String, DynamicProducerRule>() {\n                            @Override\n                            public void accept(String s, DynamicProducerRule dynamicProducerRule) {\n                                result.put(Long.parseLong(s), dynamicProducerRule);\n                            }\n                        });\n                    }\n                });\n\n\n        return result;\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/zkconfigcenter/new.json",
    "content": "{\"1\":{\"condition\":\"1==1\",\"targetTopic\":\"tuzisir1\"},\"2\":{\"condition\":\"1!=1\",\"targetTopic\":\"tuzisir2\"},\"3\":{\"condition\":\"1==1\",\"targetTopic\":\"tuzisir\"}}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_01/bytedance/split/zkconfigcenter/old.json",
    "content": "{\"1\":{\"condition\":\"1==1\",\"targetTopic\":\"tuzisir1\"},\"2\":{\"condition\":\"1!=1\",\"targetTopic\":\"tuzisir2\"}}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_02/DataStreamTest.java",
    "content": "package flink.examples.datastream._02;\n\nimport java.io.IOException;\nimport java.util.Properties;\nimport java.util.concurrent.TimeUnit;\nimport java.util.function.Consumer;\n\nimport org.apache.commons.lang3.RandomUtils;\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.serialization.AbstractDeserializationSchema;\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.tuple.Tuple2;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;\nimport org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.streaming.api.windowing.windows.TimeWindow;\nimport org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;\nimport org.apache.flink.util.Collector;\n\nimport lombok.Builder;\nimport lombok.Data;\n\n\npublic class DataStreamTest {\n\n    public static void main(String[] args) throws Exception {\n\n        ParameterTool parameters = ParameterTool.fromArgs(args);\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n\n        // 其他参数设置\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameters);\n        env.setMaxParallelism(2);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        env.setParallelism(1);\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);\n\n        Properties properties = new Properties();\n        properties.setProperty(\"bootstrap.servers\", \"localhost:9092\");\n        properties.setProperty(\"group.id\", \"test\");\n\n        DeserializationSchema<Tuple2<String, String>> d = new AbstractDeserializationSchema<Tuple2<String, String>>() {\n\n            @Override\n            public Tuple2<String, String> deserialize(byte[] message) throws IOException {\n                return null;\n            }\n        };\n\n        DataStream<Tuple2<String, String>> stream = env\n                .addSource(new FlinkKafkaConsumer<>(\"topic\", d, properties));\n\n        DataStream<MidModel> eventTimeResult =\n                env\n                        .addSource(new UserDefinedSource())\n                        .assignTimestampsAndWatermarks(\n                                new 
BoundedOutOfOrdernessTimestampExtractor<SourceModel>(Time.seconds(1L)) {\n                                    @Override\n                                    public long extractTimestamp(SourceModel sourceModel) {\n                                        return sourceModel.getTimestamp();\n                                    }\n                                }\n                        )\n                        .uid(\"source\")\n                        .keyBy(new KeySelector<SourceModel, Integer>() {\n                            @Override\n                            public Integer getKey(SourceModel sourceModel) throws Exception {\n                                return sourceModel.getId();\n                            }\n                        })\n                        // ！！！事件时间窗口\n                        .timeWindow(Time.seconds(1L))\n                        .process(new ProcessWindowFunction<SourceModel, MidModel, Integer, TimeWindow>() {\n                            @Override\n                            public void process(Integer integer, Context context, Iterable<SourceModel> iterable,\n                                    Collector<MidModel> collector) throws Exception {\n\n                                iterable.forEach(new Consumer<SourceModel>() {\n                                    @Override\n                                    public void accept(SourceModel sourceModel) {\n                                        collector.collect(\n                                                MidModel\n                                                        .builder()\n                                                        .id(sourceModel.getId())\n                                                        .price(sourceModel.getPrice())\n                                                        .timestamp(sourceModel.getTimestamp())\n                                                        .build()\n                                        );\n                                    }\n                                });\n                            }\n                        })\n                        .uid(\"process-event-time\");\n\n\n        DataStream<SinkModel> processingTimeResult = eventTimeResult\n                .keyBy(new KeySelector<MidModel, Integer>() {\n                    @Override\n                    public Integer getKey(MidModel midModel) throws Exception {\n                        return midModel.getId();\n                    }\n                })\n                // ！！！处理时间窗口\n                .window(TumblingProcessingTimeWindows.of(Time.seconds(1L)))\n                .process(new ProcessWindowFunction<MidModel, SinkModel, Integer, TimeWindow>() {\n                    @Override\n                    public void process(Integer integer, Context context, Iterable<MidModel> iterable,\n                            Collector<SinkModel> collector) throws Exception {\n\n                        iterable.forEach(new Consumer<MidModel>() {\n                            @Override\n                            public void accept(MidModel midModel) {\n                                collector.collect(\n                                        SinkModel\n                                                .builder()\n                                                .id(midModel.getId())\n                                                .price(midModel.getPrice())\n                                                .timestamp(midModel.getTimestamp())\n                                                
.build()\n                                );\n                            }\n                        });\n\n                    }\n                })\n                .uid(\"process-process-time\");\n\n        processingTimeResult.print();\n\n        env.execute();\n    }\n\n    @Data\n    @Builder\n    private static class SourceModel {\n        private int id;\n        private int price;\n        private long timestamp;\n    }\n\n    @Data\n    @Builder\n    private static class MidModel {\n        private int id;\n        private int price;\n        private long timestamp;\n    }\n\n    @Data\n    @Builder\n    private static class SinkModel {\n        private int id;\n        private int price;\n        private long timestamp;\n    }\n\n    private static class UserDefinedSource implements SourceFunction<SourceModel> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<SourceModel> sourceContext) throws Exception {\n\n            while (!this.isCancel) {\n                sourceContext.collect(\n                        SourceModel\n                                .builder()\n                                .id(RandomUtils.nextInt(0, 10))\n                                .price(RandomUtils.nextInt(0, 100))\n                                .timestamp(System.currentTimeMillis())\n                                .build()\n                );\n\n                Thread.sleep(10L);\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_02/DataStreamTest1.java",
    "content": "//package flink.examples.datastream._02;\n//\n//import java.io.IOException;\n//import java.util.Properties;\n//import java.util.concurrent.TimeUnit;\n//import java.util.function.Consumer;\n//\n//import org.apache.commons.lang3.RandomUtils;\n//import org.apache.flink.api.common.restartstrategy.RestartStrategies;\n//import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;\n//import org.apache.flink.api.common.serialization.DeserializationSchema;\n//import org.apache.flink.api.java.functions.KeySelector;\n//import org.apache.flink.api.java.tuple.Tuple2;\n//import org.apache.flink.api.java.utils.ParameterTool;\n//import org.apache.flink.streaming.api.CheckpointingMode;\n//import org.apache.flink.streaming.api.TimeCharacteristic;\n//import org.apache.flink.streaming.api.datastream.DataStream;\n//import org.apache.flink.streaming.api.environment.CheckpointConfig;\n//import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\n//import org.apache.flink.streaming.api.functions.source.SourceFunction;\n//import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;\n//import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;\n//import org.apache.flink.streaming.api.windowing.time.Time;\n//import org.apache.flink.streaming.api.windowing.windows.TimeWindow;\n//import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;\n//import org.apache.flink.util.Collector;\n//\n//import lombok.Builder;\n//import lombok.Data;\n//\n//\n//public class DataStreamTest1 {\n//\n//    public static void main(String[] args) throws Exception {\n//\n//        ParameterTool parameters = ParameterTool.fromArgs(args);\n//\n//        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n//\n//\n//        // 其他参数设置\n//        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n//                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n//        env.getConfig().setGlobalJobParameters(parameters);\n//        env.setMaxParallelism(2);\n//\n//        // ck 设置\n//        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n//        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n//        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n//        env.getCheckpointConfig()\n//                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n//\n//        env.setParallelism(1);\n//\n//        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);\n//\n//        Properties properties = new Properties();\n//        properties.setProperty(\"bootstrap.servers\", \"localhost:9092\");\n//        properties.setProperty(\"group.id\", \"test\");\n//\n//        DeserializationSchema<Tuple2<String, String>> d = new AbstractDeserializationSchema<Tuple2<String, String>>() {\n//\n//            @Override\n//            public Tuple2<String, String> deserialize(byte[] message) throws IOException {\n//                return null;\n//            }\n//        };\n//\n//        DataStream<Tuple2<String, String>> stream = env\n//                .addSource(new FlinkKafkaConsumer<>(\"topic\", d, properties));\n//\n//        DataStream<MidModel> eventTimeResult =\n//                env\n//                        .addSource(new UserDefinedSource())\n//                        .map()\n//                  
      .flatMap()\n//                        .process()\n//                        .keyBy()\n//                        .sum()\n//\n//\n//        DataStream<SinkModel> processingTimeResult = eventTimeResult\n//                .keyBy(new KeySelector<MidModel, Integer>() {\n//                    @Override\n//                    public Integer getKey(MidModel midModel) throws Exception {\n//                        return midModel.getId();\n//                    }\n//                })\n//                // ！！！处理时间窗口\n//                .window(TumblingProcessingTimeWindows.of(Time.seconds(1L)))\n//                .process(new ProcessWindowFunction<MidModel, SinkModel, Integer, TimeWindow>() {\n//                    @Override\n//                    public void process(Integer integer, Context context, Iterable<MidModel> iterable,\n//                            Collector<SinkModel> collector) throws Exception {\n//\n//                        iterable.forEach(new Consumer<MidModel>() {\n//                            @Override\n//                            public void accept(MidModel midModel) {\n//                                collector.collect(\n//                                        SinkModel\n//                                                .builder()\n//                                                .id(midModel.getId())\n//                                                .price(midModel.getPrice())\n//                                                .timestamp(midModel.getTimestamp())\n//                                                .build()\n//                                );\n//                            }\n//                        });\n//\n//                    }\n//                })\n//                .uid(\"process-process-time\");\n//\n//        processingTimeResult.print();\n//\n//        env.execute();\n//    }\n//\n//    @Data\n//    @Builder\n//    private static class SourceModel {\n//        private int id;\n//        private int price;\n//        private long timestamp;\n//    }\n//\n//    @Data\n//    @Builder\n//    private static class MidModel {\n//        private int id;\n//        private int price;\n//        private long timestamp;\n//    }\n//\n//    @Data\n//    @Builder\n//    private static class SinkModel {\n//        private int id;\n//        private int price;\n//        private long timestamp;\n//    }\n//\n//    private static class UserDefinedSource implements SourceFunction<SourceModel> {\n//\n//        private volatile boolean isCancel;\n//\n//        @Override\n//        public void run(SourceContext<SourceModel> sourceContext) throws Exception {\n//\n//            while (!this.isCancel) {\n//                sourceContext.collect(\n//                        SourceModel\n//                                .builder()\n//                                .id(RandomUtils.nextInt(0, 10))\n//                                .price(RandomUtils.nextInt(0, 100))\n//                                .timestamp(System.currentTimeMillis())\n//                                .build()\n//                );\n//\n//                Thread.sleep(10L);\n//            }\n//\n//        }\n//\n//        @Override\n//        public void cancel() {\n//            this.isCancel = true;\n//        }\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/enums_state/EnumsStateTest.java",
    "content": "package flink.examples.datastream._03.enums_state;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.common.typeutils.base.EnumSerializer;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.core.memory.DataOutputSerializer;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\n\n\npublic class EnumsStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        env.setParallelism(1);\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);\n\n        TypeInformation<StateTestEnums> t = TypeInformation.of(StateTestEnums.class);\n\n        EnumSerializer<StateTestEnums> e = (EnumSerializer<StateTestEnums>) t.createSerializer(env.getConfig());\n\n        DataOutputSerializer d = new DataOutputSerializer(10000);\n\n        e.serialize(StateTestEnums.A, d);\n\n        env.execute();\n    }\n\n    enum StateTestEnums {\n        A,\n        B,\n        C\n        ;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/enums_state/SenerioTest.java",
    "content": "package flink.examples.datastream._03.enums_state;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.function.BiConsumer;\n\nimport org.apache.flink.api.common.functions.AggregateFunction;\nimport org.apache.flink.api.common.state.ValueState;\nimport org.apache.flink.api.common.state.ValueStateDescriptor;\nimport org.apache.flink.api.common.typeinfo.TypeHint;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.tuple.Tuple2;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.streaming.api.windowing.windows.TimeWindow;\nimport org.apache.flink.util.Collector;\n\nimport com.google.common.collect.Lists;\n\nimport lombok.Builder;\nimport lombok.Data;\nimport lombok.extern.slf4j.Slf4j;\n\n\n@Slf4j\npublic class SenerioTest {\n\n    public static void main(String[] args) throws Exception {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        Tuple2<DimNameEnum, String> k = Tuple2.of(DimNameEnum.sex, \"男\");\n\n        System.out.println(k.toString());\n\n        env.setParallelism(1);\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);\n\n        env.addSource(new SourceFunction<SourceModel>() {\n\n            private volatile boolean isCancel = false;\n\n            @Override\n            public void run(SourceContext<SourceModel> ctx) throws Exception {\n\n            }\n\n            @Override\n            public void cancel() {\n                this.isCancel = true;\n            }\n        })\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<SourceModel>(Time.minutes(1L)) {\n                    @Override\n                    public long extractTimestamp(SourceModel element) {\n                        return element.getTimestamp();\n                    }\n                })\n                .keyBy(new KeySelector<SourceModel, Long>() {\n                    @Override\n                    public Long getKey(SourceModel value) throws Exception {\n                        return value.getUserId() % 1000;\n                    }\n                })\n                .timeWindow(Time.minutes(1))\n                .aggregate(\n                        new AggregateFunction<SourceModel, Map<Tuple2<DimNameEnum, String>, Long>, Map<Tuple2<DimNameEnum, String>, Long>>() {\n\n                            @Override\n                            public Map<Tuple2<DimNameEnum, String>, Long> createAccumulator() {\n                                return new HashMap<>();\n                            }\n\n                            @Override\n                            public Map<Tuple2<DimNameEnum, String>, Long> add(SourceModel value,\n                                    Map<Tuple2<DimNameEnum, String>, Long> accumulator) {\n\n                                Lists.newArrayList(Tuple2.of(DimNameEnum.province, value.getProvince())\n               
                         , Tuple2.of(DimNameEnum.age, value.getAge())\n                                        , Tuple2.of(DimNameEnum.sex, value.getSex()))\n                                        .forEach(t -> {\n                                            Long l = accumulator.get(t);\n\n                                            if (null == l) {\n                                                accumulator.put(t, 1L);\n                                            } else {\n                                                accumulator.put(t, l + 1);\n                                            }\n                                        });\n\n                                return accumulator;\n                            }\n\n                            @Override\n                            public Map<Tuple2<DimNameEnum, String>, Long> getResult(\n                                    Map<Tuple2<DimNameEnum, String>, Long> accumulator) {\n                                return accumulator;\n                            }\n\n                            @Override\n                            public Map<Tuple2<DimNameEnum, String>, Long> merge(\n                                    Map<Tuple2<DimNameEnum, String>, Long> a,\n                                    Map<Tuple2<DimNameEnum, String>, Long> b) {\n                                return null;\n                            }\n                        },\n                        new ProcessWindowFunction<Map<Tuple2<DimNameEnum, String>, Long>, SinkModel, Long, TimeWindow>() {\n\n                            private transient ValueState<Map<Tuple2<DimNameEnum, String>, Long>> todayPv;\n\n                            @Override\n                            public void open(Configuration parameters) throws Exception {\n                                super.open(parameters);\n                                this.todayPv = getRuntimeContext().getState(new ValueStateDescriptor<Map<Tuple2<DimNameEnum, String>, Long>>(\n                                        \"todayPv\", TypeInformation.of(\n                                        new TypeHint<Map<Tuple2<DimNameEnum, String>, Long>>() {\n                                        })));\n                            }\n\n                            @Override\n                            public void process(Long aLong, Context context,\n                                    Iterable<Map<Tuple2<DimNameEnum, String>, Long>> elements, Collector<SinkModel> out)\n                                    throws Exception {\n                                // 将 elements 数据 merge 到 todayPv 中\n                                // 然后 out#collect 出去即可\n\n                                this.todayPv.value()\n                                        .forEach(new BiConsumer<Tuple2<DimNameEnum, String>, Long>() {\n                                            @Override\n                                            public void accept(Tuple2<DimNameEnum, String> k,\n                                                    Long v) {\n                                                log.info(\"key 值：{}，value 值：{}\", k.toString(), v);\n                                            }\n                                        });\n                            }\n                        });\n\n        env.execute();\n    }\n\n    @Data\n    @Builder\n    private static class SourceModel {\n        private long userId;\n        private String province;\n        private String age;\n        private String sex;\n        private long timestamp;\n    }\n\n\n   
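 // 窗口函数的输出模型\n    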
 @Data\n    @Builder\n    private static class SinkModel {\n        private String dimName;\n        private String dimValue;\n        private long timestamp;\n    }\n\n    enum DimNameEnum {\n        province,\n        age,\n        sex,\n        ;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/StateExamplesTest.java",
    "content": "package flink.examples.datastream._03.state;\n\nimport java.util.LinkedList;\nimport java.util.List;\n\nimport org.apache.flink.api.common.functions.AggregateFunction;\nimport org.apache.flink.api.common.functions.ReduceFunction;\nimport org.apache.flink.api.common.state.AggregatingState;\nimport org.apache.flink.api.common.state.AggregatingStateDescriptor;\nimport org.apache.flink.api.common.state.ListState;\nimport org.apache.flink.api.common.state.ListStateDescriptor;\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.common.state.ReducingState;\nimport org.apache.flink.api.common.state.ReducingStateDescriptor;\nimport org.apache.flink.api.common.state.StateTtlConfig;\nimport org.apache.flink.api.common.state.ValueState;\nimport org.apache.flink.api.common.state.ValueStateDescriptor;\nimport org.apache.flink.api.common.time.Time;\nimport org.apache.flink.api.common.typeinfo.BasicTypeInfo;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.typeutils.ListTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class StateExamplesTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new ParallelSourceFunction<Item>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<Item> ctx) throws Exception {\n\n                        int i = 0;\n\n                        while (!this.isCancel) {\n                            ctx.collect(\n                                    Item.builder()\n                                            .name(\"item\")\n                                            .color(Color.RED)\n                                            .shape(Shape.CIRCLE)\n                                            .build()\n                            );\n                            i++;\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return item.color.ordinal();\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, Item, String>() {\n\n                    // store partial matches, i.e. 
first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final MapStateDescriptor<String, List<Item>> mapStateDesc =\n                            new MapStateDescriptor<>(\n                                    \"itemsMap\",\n                                    BasicTypeInfo.STRING_TYPE_INFO,\n                                    new ListTypeInfo<>(Item.class));\n\n                    private final ListStateDescriptor<Item> listStateDesc =\n                            new ListStateDescriptor<>(\n                                    \"itemsList\",\n                                    Item.class);\n\n                    private final ValueStateDescriptor<Item> valueStateDesc =\n                            new ValueStateDescriptor<>(\n                                    \"itemsValue\"\n                                    , Item.class);\n\n                    private final ReducingStateDescriptor<String> reducingStateDesc =\n                            new ReducingStateDescriptor<>(\n                                    \"itemsReducing\"\n                                    , new ReduceFunction<String>() {\n                                @Override\n                                public String reduce(String value1, String value2) throws Exception {\n                                    return value1 + value2;\n                                }\n                            }, String.class);\n\n                    private final AggregatingStateDescriptor<Item, String, String> aggregatingStateDesc =\n                            new AggregatingStateDescriptor<Item, String, String>(\"itemsAgg\",\n                                    new AggregateFunction<Item, String, String>() {\n                                        @Override\n                                        public String createAccumulator() {\n                                            return \"\";\n                                        }\n\n                                        @Override\n                                        public String add(Item value, String accumulator) {\n                                            return accumulator + value.name;\n                                        }\n\n                                        @Override\n                                        public String getResult(String accumulator) {\n                                            return accumulator;\n                                        }\n\n                                        @Override\n                                        public String merge(String a, String b) {\n                                            return null;\n                                        }\n                                    }, String.class);\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n                        mapStateDesc.enableTimeToLive(StateTtlConfig\n                                .newBuilder(Time.milliseconds(1))\n                                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)\n                                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)\n                                .cleanupInRocksdbCompactFilter(10)\n                                .build());\n\n                    }\n\n\n                    @Override\n                    public void 
processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n\n                        MapState<String, List<Item>> mapState = getRuntimeContext().getMapState(mapStateDesc);\n\n                        List<Item> l = mapState.get(value.name);\n\n                        if (null == l) {\n                            l = new LinkedList<>();\n                        }\n\n                        l.add(value);\n\n                        mapState.put(value.name, l);\n\n                        ListState<Item> listState = getRuntimeContext().getListState(listStateDesc);\n\n                        listState.add(value);\n\n                        Object o = listState.get();\n\n                        ValueState<Item> valueState = getRuntimeContext().getState(valueStateDesc);\n\n                        valueState.update(value);\n\n                        Item i = valueState.value();\n\n                        AggregatingState<Item, String> aggregatingState = getRuntimeContext().getAggregatingState(aggregatingStateDesc);\n\n                        aggregatingState.add(value);\n\n                        String aggResult = aggregatingState.get();\n\n                        ReducingState<String> reducingState = getRuntimeContext().getReducingState(reducingStateDesc);\n\n                        reducingState.add(value.name);\n\n                        String reducingResult = reducingState.get();\n\n                        System.out.println(1);\n\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_01_broadcast_state/BroadcastStateTest.java",
    "content": "package flink.examples.datastream._03.state._01_broadcast_state;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\n\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.common.typeinfo.BasicTypeInfo;\nimport org.apache.flink.api.common.typeinfo.TypeHint;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.typeutils.ListTypeInfo;\nimport org.apache.flink.streaming.api.datastream.BroadcastStream;\nimport org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class BroadcastStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        // a map descriptor to store the name of the rule (string) and the rule itself.\n        MapStateDescriptor<String, Rule> ruleStateDescriptor = new MapStateDescriptor<>(\n                \"RulesBroadcastState\",\n                BasicTypeInfo.STRING_TYPE_INFO,\n                TypeInformation.of(new TypeHint<Rule>() {\n                }));\n\n        // broadcast the rules and create the broadcast state\n        BroadcastStream<Rule> ruleBroadcastStream = flinkEnv.env()\n                .addSource(new SourceFunction<Rule>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<Rule> ctx) throws Exception {\n\n                        int i = 0;\n\n                        while (!this.isCancel) {\n                            ctx.collect(\n                                    Rule.builder()\n                                            .name(\"rule\" + i)\n                                            .first(Shape.CIRCLE)\n                                            .second(Shape.SQUARE)\n                                            .build()\n                            );\n                            i++;\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .setParallelism(1)\n                .broadcast(ruleStateDescriptor);\n\n        flinkEnv.env()\n                .addSource(new ParallelSourceFunction<Item>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<Item> ctx) throws Exception {\n\n                        int i = 0;\n\n                        while (!this.isCancel) {\n                            ctx.collect(\n                                    Item.builder()\n                                            .name(\"item\" + i)\n                                            .color(Color.RED)\n                                            
.shape(Shape.CIRCLE)\n                                            .build()\n                            );\n                            i++;\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .keyBy(new KeySelector<Item, Color>() {\n                    @Override\n                    public Color getKey(Item item) throws Exception {\n                        return item.color;\n                    }\n                })\n                .connect(ruleBroadcastStream)\n                .process(new KeyedBroadcastProcessFunction<Color, Item, Rule, String>() {\n\n                    // store partial matches, i.e. first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final MapStateDescriptor<String, List<Item>> mapStateDesc =\n                            new MapStateDescriptor<>(\n                                    \"items\",\n                                    BasicTypeInfo.STRING_TYPE_INFO,\n                                    new ListTypeInfo<>(Item.class));\n\n                    // identical to our ruleStateDescriptor above\n                    private final MapStateDescriptor<String, Rule> ruleStateDescriptor =\n                            new MapStateDescriptor<>(\n                                    \"RulesBroadcastState\",\n                                    BasicTypeInfo.STRING_TYPE_INFO,\n                                    TypeInformation.of(new TypeHint<Rule>() {\n                                    }));\n\n                    @Override\n                    public void processBroadcastElement(Rule value,\n                            Context ctx,\n                            Collector<String> out) throws Exception {\n                        ctx.getBroadcastState(ruleStateDescriptor).put(value.name, value);\n                    }\n\n                    @Override\n                    public void processElement(Item value,\n                            ReadOnlyContext ctx,\n                            Collector<String> out) throws Exception {\n\n                        final MapState<String, List<Item>> state = getRuntimeContext().getMapState(mapStateDesc);\n                        final Shape shape = value.getShape();\n\n                        for (Map.Entry<String, Rule> entry\n                                : ctx.getBroadcastState(ruleStateDescriptor).immutableEntries()) {\n                            final String ruleName = entry.getKey();\n                            final Rule rule = entry.getValue();\n\n                            List<Item> stored = state.get(ruleName);\n                            if (stored == null) {\n                                stored = new ArrayList<>();\n                            }\n\n                            if (shape == rule.second && !stored.isEmpty()) {\n                                for (Item i : stored) {\n                                    out.collect(\"MATCH: \" + i + \" - \" + value);\n                                }\n                                stored.clear();\n                            }\n\n                            // there is no else{} to cover if rule.first == rule.second\n                            if (shape.equals(rule.first)) {\n                                
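// this item matches the rule's first shape: buffer it until an item with the rule's second shape arrives\n                                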
stored.add(value);\n                            }\n\n                            if (stored.isEmpty()) {\n                                state.remove(ruleName);\n                            } else {\n                                state.put(ruleName, stored);\n                            }\n                        }\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_03_rocksdb/CreateStateBackendTest.java",
    "content": "//package flink.examples.datastream._03.state._03_rocksdb;\n//\n//import java.util.LinkedList;\n//import java.util.List;\n//\n//import org.apache.flink.api.common.state.MapState;\n//import org.apache.flink.api.common.state.MapStateDescriptor;\n//import org.apache.flink.api.common.state.StateTtlConfig;\n//import org.apache.flink.api.common.state.StateTtlConfig.TtlTimeCharacteristic;\n//import org.apache.flink.api.common.time.Time;\n//import org.apache.flink.api.java.functions.KeySelector;\n//import org.apache.flink.configuration.Configuration;\n//import org.apache.flink.streaming.api.functions.KeyedProcessFunction;\n//import org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;\n//import org.apache.flink.util.Collector;\n//\n//import flink.examples.FlinkEnvUtils;\n//import flink.examples.FlinkEnvUtils.FlinkEnv;\n//import lombok.Builder;\n//import lombok.Data;\n//\n///**\n// * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n// */\n//\n//public class CreateStateBackendTest {\n//\n//\n//    public static void main(String[] args) throws Exception {\n//        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n//\n//        flinkEnv.env().setParallelism(1);\n//\n//        flinkEnv.env()\n//                .addSource(new ParallelSourceFunction<Item>() {\n//\n//                    private volatile boolean isCancel = false;\n//\n//                    @Override\n//                    public void run(SourceContext<Item> ctx) throws Exception {\n//\n//                        int i = 0;\n//\n//                        while (!this.isCancel) {\n//                            ctx.collect(\n//                                    Item.builder()\n//                                            .name(\"item\")\n//                                            .color(Color.RED)\n//                                            .shape(Shape.CIRCLE)\n//                                            .build()\n//                            );\n//                            i++;\n//                            Thread.sleep(1000);\n//                        }\n//                    }\n//\n//                    @Override\n//                    public void cancel() {\n//                        this.isCancel = true;\n//                    }\n//                })\n//                .keyBy(new KeySelector<Item, Integer>() {\n//                    @Override\n//                    public Integer getKey(Item item) throws Exception {\n//                        return item.color.ordinal();\n//                    }\n//                })\n//                .process(new KeyedProcessFunction<Integer, Item, String>() {\n//\n//                    // store partial matches, i.e. 
first elements of the pair waiting for their second element\n//                    // we keep a list as we may have many first elements waiting\n//                    private MapStateDescriptor<String, String> mapStateDescriptor =\n//                            new MapStateDescriptor<>(\"map state name\", String.class, String.class);\n//\n//                    private transient MapState<String, String> mapState;\n//\n//                    @Override\n//                    public void open(Configuration parameters) throws Exception {\n//                        super.open(parameters);\n//\n//                        StateTtlConfig stateTtlConfig = StateTtlConfig\n//                                // 1.ttl 时长\n//                                .newBuilder(Time.milliseconds(1))\n//\n//                                // 2.更新类型\n//                                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)\n//                                // 创建和写入更新\n//                                .updateTtlOnCreateAndWrite()\n//                                // 读取和写入更新\n//                                .updateTtlOnReadAndWrite()\n//\n//                                // 3.过期状态的访问可见性\n//                                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)\n//                                // 如果还没有被删除就返回\n//                                .returnExpiredIfNotCleanedUp()\n//                                // 过期的永远不返回\n//                                .neverReturnExpired()\n//\n//                                // 4.过期的时间语义\n//                                .setTtlTimeCharacteristic(TtlTimeCharacteristic.ProcessingTime)\n//                                .useProcessingTime()\n//\n//                                // 5.清除策略\n//                                // 做 CK 时把所有状态删除掉\n//                                .cleanupFullSnapshot()\n//                                // 增量删除，只有有状态记录访问时，才会做删除；并且他会加大任务处理延迟。\n//                                // 增量删除仅仅支持 HeapStateBeckend，Rocksdb 不支持！！！\n//                                // 每访问 1 此 state，遍历 1000 条进行删除\n//                                .cleanupIncrementally(1000, true)\n//                                // Rocksdb 状态后端在 rocksdb 做 compaction 时清除过期状态。\n//                                // 做 compaction 时每隔 3 个 entry，重新更新一下时间戳（用于判断是否过期）\n//                                .cleanupInRocksdbCompactFilter(3)\n//                                // 禁用 cleanup\n//                                .disableCleanupInBackground()\n//                                .build();\n//\n//                        this.mapStateDescriptor.enableTimeToLive(stateTtlConfig);\n//                        this.mapState = this.getRuntimeContext().getMapState(mapStateDescriptor);\n//                    }\n//\n//\n//                    @Override\n//                    public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n//\n//                        MapState<String, List<Item>> mapState = getRuntimeContext().getMapState(mapStateDesc);\n//\n//                        List<Item> l = mapState.get(value.name);\n//\n//                        Object o = mapState.get(\"测试\");\n//\n//                        if (null == l) {\n//                            l = new LinkedList<>();\n//                        }\n//\n//                        l.add(value);\n//\n//                        mapState.put(value.name, l);\n//\n//\n//\n//                    }\n//                })\n//                .print();\n//\n//\n//        
flinkEnv.env().execute(\"广播状态测试任务\");\n//\n//    }\n//\n//    @Builder\n//    @Data\n//    private static class Rule {\n//        private String name;\n//        private Shape first;\n//        private Shape second;\n//    }\n//\n//    @Builder\n//    @Data\n//    private static class Item {\n//        private String name;\n//        private Shape shape;\n//        private Color color;\n//\n//    }\n//\n//\n//    private enum Shape {\n//        CIRCLE,\n//        SQUARE\n//        ;\n//    }\n//\n//    private enum Color {\n//        RED,\n//        BLUE,\n//        BLACK,\n//        ;\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_03_rocksdb/GettingStartDemo.java",
    "content": "package flink.examples.datastream._03.state._03_rocksdb;\n\nimport org.rocksdb.Options;\nimport org.rocksdb.RocksDB;\nimport org.rocksdb.RocksDBException;\n\npublic class GettingStartDemo {\n\n    // 因为RocksDB是由C++编写的，在Java中使用首先需要加载Native库\n    static {\n        // Loads the necessary library files.\n        // Calling this method twice will have no effect.\n        // By default the method extracts the shared library for loading at\n        // java.io.tmpdir, however, you can override this temporary location by\n        // setting the environment variable ROCKSDB_SHAREDLIB_DIR.\n        // 默认这个方法会加压一个共享库到java.io.tmpdir\n        RocksDB.loadLibrary();\n    }\n\n    public static void main(String[] args) throws RocksDBException {\n        // 1. 打开数据库\n        // 1.1 创建数据库配置\n        Options dbOpt = new Options();\n        // 1.2 配置当数据库不存在时自动创建\n        dbOpt.setCreateIfMissing(true);\n        // 1.3 打开数据库。因为RocksDB默认是保存在本地磁盘，所以需要指定位置\n        RocksDB rdb = RocksDB.open(dbOpt, \"./data/rocksdb\");\n        // 2. 写入数据\n        // 2.1 RocksDB都是以字节流的方式写入数据库中，所以我们需要将字符串转换为字节流再写入。这点类似于HBase\n        byte[] key = \"zhangsan\".getBytes();\n        byte[] value = \"20\".getBytes();\n        // 2.2 调用put方法写入数据\n        rdb.put(key, value);\n        System.out.println(\"写入数据到RocksDB完成！\");\n        // 3. 调用delete方法读取数据\n        System.out.println(\"从RocksDB读取key = \" + new String(key) + \"的value为\" + new String(rdb.get(key)));\n        // 4. 移除数据\n        rdb.delete(key);\n        // 关闭资源\n        rdb.close();\n        dbOpt.close();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_03_rocksdb/Rocksdb_OperatorAndKeyedState_StateStorageDIr_Test.java",
    "content": "package flink.examples.datastream._03.state._03_rocksdb;\n\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.ListState;\nimport org.apache.flink.api.common.state.ListStateDescriptor;\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.common.state.StateTtlConfig;\nimport org.apache.flink.api.common.state.StateTtlConfig.StateVisibility;\nimport org.apache.flink.api.common.state.StateTtlConfig.UpdateType;\nimport org.apache.flink.api.common.time.Time;\nimport org.apache.flink.api.common.typeinfo.BasicTypeInfo;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.runtime.state.FunctionInitializationContext;\nimport org.apache.flink.runtime.state.FunctionSnapshotContext;\nimport org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class Rocksdb_OperatorAndKeyedState_StateStorageDIr_Test {\n\n\n    public static void main(String[] args) throws Exception {\n\n//        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--execution.savepoint.path\", \"\"});\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new UserDefinedSource())\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return item.name.hashCode();\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, Item, String>() {\n\n                    // store partial matches, i.e. 
first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final MapStateDescriptor<String, Item> mapStateDesc =\n                            new MapStateDescriptor<>(\n                                    \"key1\",\n                                    String.class\n                                    , Item.class);\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n                        mapStateDesc.enableTimeToLive(StateTtlConfig\n                                .newBuilder(Time.hours(24))\n                                .setUpdateType(UpdateType.OnCreateAndWrite)\n                                .setStateVisibility(StateVisibility.NeverReturnExpired)\n                                .cleanupFullSnapshot()\n                                .build());\n                    }\n\n\n                    @Override\n                    public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n\n                        MapState<String, Item> mapState = getRuntimeContext().getMapState(mapStateDesc);\n\n                        mapState.put(value.name, value);\n\n                        out.collect(value.name);\n\n                    }\n                })\n                .keyBy(new KeySelector<String, Integer>() {\n                    @Override\n                    public Integer getKey(String value) throws Exception {\n                        return value.hashCode();\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, String, String>() {\n\n                    // store partial matches, i.e. 
first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final MapStateDescriptor<String, String> mapStateDesc =\n                            new MapStateDescriptor<>(\n                                    \"key2\",\n                                    BasicTypeInfo.STRING_TYPE_INFO,\n                                    BasicTypeInfo.STRING_TYPE_INFO);\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n                        mapStateDesc.enableTimeToLive(StateTtlConfig\n                                .newBuilder(Time.hours(24))\n                                .setUpdateType(UpdateType.OnCreateAndWrite)\n                                .setStateVisibility(StateVisibility.NeverReturnExpired)\n                                .cleanupFullSnapshot()\n                                .build());\n                    }\n\n\n                    @Override\n                    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {\n                        MapState<String, String> mapState = getRuntimeContext().getMapState(mapStateDesc);\n\n                        mapState.put(value, value);\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n    private static class UserDefinedSource extends RichParallelSourceFunction<Item>\n            implements CheckpointedFunction {\n\n        private final ListStateDescriptor<Item> listStateDescriptor =\n                new ListStateDescriptor<Item>(\"a\", Item.class);\n\n        private volatile boolean isCancel = false;\n\n        private transient ListState<Item> l;\n\n        @Override\n        public void run(SourceContext<Item> ctx) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n                ctx.collect(\n                        Item.builder()\n                                .name(\"item\" + i)\n                                .color(Color.RED)\n                                .shape(Shape.CIRCLE)\n                                .build()\n                );\n                i++;\n\n                List<Item> items = (List<Item>) l.get();\n\n                items.add(Item.builder()\n                        .name(\"item\")\n                        .color(Color.RED)\n                        .shape(Shape.CIRCLE)\n                        .build());\n\n                Thread.sleep(1);\n            }\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public void snapshotState(FunctionSnapshotContext context) throws Exception {\n            System.out.println(1);\n        }\n\n        @Override\n        public void initializeState(FunctionInitializationContext context) throws Exception {\n            this.l 
= context.getOperatorStateStore().getListState(listStateDescriptor);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_03_rocksdb/keyed_state/RocksBackendKeyedMapStateTest.java",
    "content": "package flink.examples.datastream._03.state._03_rocksdb.keyed_state;\n\nimport java.util.LinkedList;\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.common.state.StateTtlConfig;\nimport org.apache.flink.api.common.state.StateTtlConfig.TtlTimeCharacteristic;\nimport org.apache.flink.api.common.time.Time;\nimport org.apache.flink.api.common.typeinfo.BasicTypeInfo;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.typeutils.ListTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class RocksBackendKeyedMapStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new ParallelSourceFunction<Item>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<Item> ctx) throws Exception {\n\n                        int i = 0;\n\n                        while (!this.isCancel) {\n                            ctx.collect(\n                                    Item.builder()\n                                            .name(\"item\")\n                                            .color(Color.RED)\n                                            .shape(Shape.CIRCLE)\n                                            .build()\n                            );\n                            i++;\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return item.color.ordinal();\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, Item, String>() {\n\n                    // store partial matches, i.e. first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final MapStateDescriptor<String, List<Item>> mapStateDesc =\n                            new MapStateDescriptor<>(\n                                    \"a\",\n                                    BasicTypeInfo.STRING_TYPE_INFO,\n                                    new ListTypeInfo<>(Item.class));\n\n                    // store partial matches, i.e. 
first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final MapStateDescriptor<String, List<Item>> mapStateDescb =\n                            new MapStateDescriptor<>(\n                                    \"b\",\n                                    BasicTypeInfo.STRING_TYPE_INFO,\n                                    new ListTypeInfo<>(Item.class));\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n                        mapStateDesc.enableTimeToLive(StateTtlConfig\n                                .newBuilder(Time.milliseconds(1))\n                                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)\n                                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)\n                                .cleanupInRocksdbCompactFilter(10)\n                                .build());\n\n                        StateTtlConfig\n                                // 1.ttl 时长\n                                .newBuilder(Time.milliseconds(1))\n\n                                // 2.更新类型\n                                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)\n                                // 创建和写入更新\n                                .updateTtlOnCreateAndWrite()\n                                // 读取和写入更新\n                                .updateTtlOnReadAndWrite()\n\n                                // 3.过期状态的访问可见性\n                                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)\n                                // 如果还没有被删除就返回\n                                .returnExpiredIfNotCleanedUp()\n                                // 过期的永远不返回\n                                .neverReturnExpired()\n\n                                // 4.过期的时间语义\n                                .setTtlTimeCharacteristic(TtlTimeCharacteristic.ProcessingTime)\n                                .useProcessingTime()\n\n                                // 5.清除策略\n                                // 从 cp 或 sp 恢复时清除过期状态\n                                .cleanupFullSnapshot()\n                                // 增量删除，只有有状态记录访问时，才会做删除；并且他会加大任务处理延迟。\n                                // 增量删除仅仅支持 HeapStateBeckend，Rocksdb 不支持！！！\n                                // 每访问 1 此 state，遍历 1000 条进行删除\n                                .cleanupIncrementally(1000, true)\n                                // Rocksdb 状态后端在 rocksdb 做 compaction 时清除过期状态。\n                                // 做 compaction 时每隔 3 个 entry，重新更新一下时间戳（用于判断是否过期）\n                                .cleanupInRocksdbCompactFilter(3)\n                                // 禁用 cleanup\n                                .disableCleanupInBackground()\n                                .build();\n\n                    }\n\n\n                    @Override\n                    public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n\n                        MapState<String, List<Item>> mapState = getRuntimeContext().getMapState(mapStateDesc);\n\n                        List<Item> l = mapState.get(value.name);\n\n                        Object o = mapState.get(\"测试\");\n\n                        if (null == l) {\n                            l = new LinkedList<>();\n                        }\n\n                        l.add(value);\n\n 
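                       // 修改后的 list 必须重新 put 回 MapState，RocksDB 状态后端才会持久化本次更新\n 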
                       mapState.put(value.name, l);\n\n\n\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"RocksDB Keyed MapState 测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_03_rocksdb/keyed_state/RocksBackendKeyedValueStateTest.java",
    "content": "package flink.examples.datastream._03.state._03_rocksdb.keyed_state;\n\nimport java.util.LinkedList;\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.StateTtlConfig;\nimport org.apache.flink.api.common.state.ValueState;\nimport org.apache.flink.api.common.state.ValueStateDescriptor;\nimport org.apache.flink.api.common.time.Time;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.typeutils.ListTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class RocksBackendKeyedValueStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new ParallelSourceFunction<Item>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<Item> ctx) throws Exception {\n\n                        int i = 0;\n\n                        while (!this.isCancel) {\n                            ctx.collect(\n                                    Item.builder()\n                                            .name(\"item\")\n                                            .color(Color.RED)\n                                            .shape(Shape.CIRCLE)\n                                            .build()\n                            );\n                            i++;\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return item.color.ordinal();\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, Item, String>() {\n\n                    // store partial matches, i.e. 
first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final ValueStateDescriptor<List<Item>> valueStateDesc =\n                            new ValueStateDescriptor<>(\n                                    \"items\"\n                                    , new ListTypeInfo<>(Item.class));\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n                        valueStateDesc.enableTimeToLive(StateTtlConfig\n                                .newBuilder(Time.milliseconds(1))\n                                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)\n                                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)\n                                .cleanupInRocksdbCompactFilter(10)\n                                .build());\n\n                    }\n\n\n                    @Override\n                    public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n\n                        ValueState<List<Item>> valueState = getRuntimeContext().getState(valueStateDesc);\n\n                        List<Item> l = valueState.value();\n\n                        if (null == l) {\n                            l = new LinkedList<>();\n                        }\n\n                        l.add(value);\n\n                        valueState.update(l);\n\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_03_rocksdb/operator_state/KeyedStreamOperatorListStateTest.java",
    "content": "package flink.examples.datastream._03.state._03_rocksdb.operator_state;\n\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.ListState;\nimport org.apache.flink.api.common.state.ListStateDescriptor;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.runtime.state.FunctionInitializationContext;\nimport org.apache.flink.runtime.state.FunctionSnapshotContext;\nimport org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport com.google.common.collect.Lists;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class KeyedStreamOperatorListStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new UserDefinedSource())\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return item.color.ordinal();\n                    }\n                })\n                .process(new UserDefinedKeyPF())\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n    private static class UserDefinedSource extends RichParallelSourceFunction<Item>\n            implements CheckpointedFunction {\n\n        private final ListStateDescriptor<Item> listStateDescriptor =\n                new ListStateDescriptor<Item>(\"a\", Item.class);\n\n        private volatile boolean isCancel = false;\n\n        private transient ListState<Item> l;\n\n        @Override\n        public void run(SourceContext<Item> ctx) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n                ctx.collect(\n                        Item.builder()\n                                .name(\"item\")\n                                .color(Color.RED)\n                                .shape(Shape.CIRCLE)\n                                .build()\n                );\n                i++;\n\n                List<Item> items = (List<Item>) l.get();\n\n                items.add(Item.builder()\n                        .name(\"item\")\n                        .color(Color.RED)\n                        .shape(Shape.CIRCLE)\n                        .build());\n\n                l.update(items);\n\n\n                Thread.sleep(1000);\n            }\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public void 
snapshotState(FunctionSnapshotContext context) throws Exception {\n            System.out.println(1);\n        }\n\n        @Override\n        public void initializeState(FunctionInitializationContext context) throws Exception {\n            this.l = context.getOperatorStateStore().getListState(listStateDescriptor);\n        }\n    }\n\n    private static class UserDefinedKeyPF extends KeyedProcessFunction<Integer, Item, String> implements CheckpointedFunction {\n\n        private final ListStateDescriptor<Item> listStateDescriptor =\n                new ListStateDescriptor<Item>(\"b\", Item.class);\n\n        private ListState<Item> listState;\n\n        @Override\n        public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n            this.listState.update(Lists.newArrayList(value));\n        }\n\n        @Override\n        public void snapshotState(FunctionSnapshotContext context) throws Exception {\n            System.out.println(1);\n        }\n\n        @Override\n        public void initializeState(FunctionInitializationContext context) throws Exception {\n            this.listState = context.getKeyedStateStore().getListState(listStateDescriptor);\n            this.listState = context.getOperatorStateStore().getListState(listStateDescriptor);\n        }\n    }\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_03_rocksdb/operator_state/RocksBackendOperatorListStateTest.java",
    "content": "package flink.examples.datastream._03.state._03_rocksdb.operator_state;\n\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.ListState;\nimport org.apache.flink.api.common.state.ListStateDescriptor;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.runtime.state.FunctionInitializationContext;\nimport org.apache.flink.runtime.state.FunctionSnapshotContext;\nimport org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class RocksBackendOperatorListStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new UserDefinedSource())\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return item.color.ordinal();\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, Item, String>() {\n\n                    @Override\n                    public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n    private static class UserDefinedSource extends RichParallelSourceFunction<Item>\n            implements CheckpointedFunction {\n\n        private final ListStateDescriptor<Item> listStateDescriptor =\n                new ListStateDescriptor<Item>(\"a\", Item.class);\n\n        private volatile boolean isCancel = false;\n\n        private transient ListState<Item> l;\n\n        @Override\n        public void run(SourceContext<Item> ctx) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n                ctx.collect(\n                        Item.builder()\n                                .name(\"item\")\n                                .color(Color.RED)\n                                .shape(Shape.CIRCLE)\n                                .build()\n                );\n                i++;\n\n                List<Item> items = (List<Item>) l.get();\n\n                items.add(Item.builder()\n                        .name(\"item\")\n                        .color(Color.RED)\n                        .shape(Shape.CIRCLE)\n                        .build());\n\n                l.update(items);\n\n\n                
Thread.sleep(1000);\n            }\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public void snapshotState(FunctionSnapshotContext context) throws Exception {\n            System.out.println(1);\n        }\n\n        @Override\n        public void initializeState(FunctionInitializationContext context) throws Exception {\n            this.l = context.getOperatorStateStore().getListState(listStateDescriptor);\n        }\n    }\n\n}\n"
  },
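  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_03_rocksdb/operator_state/BufferingSinkSketch.java",
    "content": "package flink.examples.datastream._03.state._03_rocksdb.operator_state;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.ListState;\nimport org.apache.flink.api.common.state.ListStateDescriptor;\nimport org.apache.flink.runtime.state.FunctionInitializationContext;\nimport org.apache.flink.runtime.state.FunctionSnapshotContext;\nimport org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;\nimport org.apache.flink.streaming.api.functions.sink.SinkFunction;\n\n/**\n * A minimal, self-contained sketch (class name and package placement are illustrative) of the\n * operator ListState pattern from the Flink CheckpointedFunction docs: buffer records locally,\n * copy the buffer into state only inside snapshotState(), and read it back in initializeState()\n * when isRestored() is true. It complements RocksBackendOperatorListStateTest, which updates the\n * state on every record inside run() instead.\n */\npublic class BufferingSinkSketch implements SinkFunction<String>, CheckpointedFunction {\n\n    private final int threshold;\n\n    // runtime buffer; the actual output logic only ever touches this list\n    private transient List<String> bufferedElements;\n\n    // operator list state used purely to persist the buffer across checkpoints\n    private transient ListState<String> checkpointedState;\n\n    public BufferingSinkSketch(int threshold) {\n        this.threshold = threshold;\n        this.bufferedElements = new ArrayList<>();\n    }\n\n    @Override\n    public void invoke(String value, Context context) {\n        bufferedElements.add(value);\n        if (bufferedElements.size() >= threshold) {\n            // placeholder for the real sink logic\n            bufferedElements.forEach(System.out::println);\n            bufferedElements.clear();\n        }\n    }\n\n    @Override\n    public void snapshotState(FunctionSnapshotContext context) throws Exception {\n        // on every checkpoint: drop the previous snapshot and persist the current buffer\n        checkpointedState.clear();\n        checkpointedState.addAll(bufferedElements);\n    }\n\n    @Override\n    public void initializeState(FunctionInitializationContext context) throws Exception {\n        ListStateDescriptor<String> descriptor =\n                new ListStateDescriptor<>(\"buffered-elements\", String.class);\n\n        checkpointedState = context.getOperatorStateStore().getListState(descriptor);\n\n        if (bufferedElements == null) {\n            bufferedElements = new ArrayList<>();\n        }\n\n        // on recovery, move the snapshotted elements back into the local buffer\n        if (context.isRestored()) {\n            for (String element : checkpointedState.get()) {\n                bufferedElements.add(element);\n            }\n        }\n    }\n}\n"
  },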
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_04_filesystem/keyed_state/FsStateBackendKeyedMapStateTest.java",
    "content": "package flink.examples.datastream._03.state._04_filesystem.keyed_state;\n\nimport java.util.LinkedList;\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.common.typeinfo.BasicTypeInfo;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.typeutils.ListTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class FsStateBackendKeyedMapStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--state.backend\", \"filesystem\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new ParallelSourceFunction<Item>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<Item> ctx) throws Exception {\n\n                        int i = 0;\n\n                        while (!this.isCancel) {\n                            ctx.collect(\n                                    Item.builder()\n                                            .name(\"item\")\n                                            .color(Color.RED)\n                                            .shape(Shape.CIRCLE)\n                                            .build()\n                            );\n                            i++;\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return 0;\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, Item, String>() {\n\n                    // store partial matches, i.e. 
first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final MapStateDescriptor<String, List<Item>> mapStateDesc =\n                            new MapStateDescriptor<>(\n                                    \"items\",\n                                    BasicTypeInfo.STRING_TYPE_INFO,\n                                    new ListTypeInfo<>(Item.class));\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n//                        mapStateDesc.enableTimeToLive(StateTtlConfig\n//                                .newBuilder(Time.hours(1))\n//                                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)\n//                                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)\n//                                .cleanupInRocksdbCompactFilter(10)\n//                                .build());\n\n                    }\n\n\n                    @Override\n                    public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n\n                        MapState<String, List<Item>> mapState = getRuntimeContext().getMapState(mapStateDesc);\n\n                        List<Item> l = mapState.get(value.name);\n\n                        Object o = mapState.get(\"测试\");\n\n                        if (null == l) {\n                            l = new LinkedList<>();\n                        }\n\n                        l.add(value);\n\n                        mapState.put(value.name, l);\n\n\n\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n}\n"
  },
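  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_04_filesystem/keyed_state/KeyedMapStateWithTtlSketch.java",
    "content": "package flink.examples.datastream._03.state._04_filesystem.keyed_state;\n\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.common.state.StateTtlConfig;\nimport org.apache.flink.api.common.time.Time;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.util.Collector;\n\n/**\n * A minimal sketch (illustrative class name) of the state TTL wiring that is commented out in\n * FsStateBackendKeyedMapStateTest: build a StateTtlConfig, enable it on the descriptor, and only\n * then obtain the state handle in open(). cleanupIncrementally() is used here because it works on\n * any state backend; cleanupInRocksdbCompactFilter() only applies to the RocksDB backend.\n */\npublic class KeyedMapStateWithTtlSketch extends KeyedProcessFunction<Integer, String, String> {\n\n    private transient MapState<String, Long> countPerName;\n\n    @Override\n    public void open(Configuration parameters) throws Exception {\n        super.open(parameters);\n\n        StateTtlConfig ttlConfig = StateTtlConfig\n                // entries expire one hour after creation or last write\n                .newBuilder(Time.hours(1))\n                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)\n                // expired entries are never returned, even if not yet cleaned up\n                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)\n                // piggyback cleanup on state accesses, 10 entries per access\n                .cleanupIncrementally(10, false)\n                .build();\n\n        MapStateDescriptor<String, Long> descriptor =\n                new MapStateDescriptor<>(\"count-per-name\", String.class, Long.class);\n        // TTL must be enabled on the descriptor before the state object is created\n        descriptor.enableTimeToLive(ttlConfig);\n\n        this.countPerName = getRuntimeContext().getMapState(descriptor);\n    }\n\n    @Override\n    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {\n        Long current = countPerName.get(value);\n        long updated = (current == null ? 0L : current) + 1L;\n        countPerName.put(value, updated);\n        out.collect(value + \" -> \" + updated);\n    }\n}\n"
  },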
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_04_filesystem/operator_state/FsStateBackendOperatorListStateTest.java",
    "content": "package flink.examples.datastream._03.state._04_filesystem.operator_state;\n\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.ListState;\nimport org.apache.flink.api.common.state.ListStateDescriptor;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.runtime.state.FunctionInitializationContext;\nimport org.apache.flink.runtime.state.FunctionSnapshotContext;\nimport org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class FsStateBackendOperatorListStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--state.backend\", \"filesystem\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new UserDefinedSource())\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return item.color.ordinal();\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, Item, String>() {\n\n                    @Override\n                    public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n    private static class UserDefinedSource extends RichParallelSourceFunction<Item>\n            implements CheckpointedFunction {\n\n        private final ListStateDescriptor<Item> listStateDescriptor =\n                new ListStateDescriptor<Item>(\"a\", Item.class);\n\n        private volatile boolean isCancel = false;\n\n        private transient ListState<Item> l;\n\n        @Override\n        public void run(SourceContext<Item> ctx) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n                ctx.collect(\n                        Item.builder()\n                                .name(\"item\")\n                                .color(Color.RED)\n                                .shape(Shape.CIRCLE)\n                                .build()\n                );\n                i++;\n\n                List<Item> items = (List<Item>) l.get();\n\n                items.add(Item.builder()\n                        .name(\"item\")\n                        .color(Color.RED)\n                        .shape(Shape.CIRCLE)\n                        .build());\n\n                
l.update(items);\n\n\n                Thread.sleep(1000);\n            }\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public void snapshotState(FunctionSnapshotContext context) throws Exception {\n            System.out.println(1);\n        }\n\n        @Override\n        public void initializeState(FunctionInitializationContext context) throws Exception {\n            this.l = context.getOperatorStateStore().getListState(listStateDescriptor);\n        }\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_03/state/_05_memory/keyed_state/MemoryStateBackendKeyedMapStateTest.java",
    "content": "package flink.examples.datastream._03.state._05_memory.keyed_state;\n\nimport java.util.LinkedList;\nimport java.util.List;\n\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.common.state.StateTtlConfig;\nimport org.apache.flink.api.common.time.Time;\nimport org.apache.flink.api.common.typeinfo.BasicTypeInfo;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.typeutils.ListTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/broadcast_state/\n */\n\npublic class MemoryStateBackendKeyedMapStateTest {\n\n\n    public static void main(String[] args) throws Exception {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--state.backend\", \"jobmanager\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new ParallelSourceFunction<Item>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<Item> ctx) throws Exception {\n\n                        int i = 0;\n\n                        while (!this.isCancel) {\n                            ctx.collect(\n                                    Item.builder()\n                                            .name(\"item\")\n                                            .color(Color.RED)\n                                            .shape(Shape.CIRCLE)\n                                            .build()\n                            );\n                            i++;\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .keyBy(new KeySelector<Item, Integer>() {\n                    @Override\n                    public Integer getKey(Item item) throws Exception {\n                        return item.color.ordinal();\n                    }\n                })\n                .process(new KeyedProcessFunction<Integer, Item, String>() {\n\n                    // store partial matches, i.e. 
first elements of the pair waiting for their second element\n                    // we keep a list as we may have many first elements waiting\n                    private final MapStateDescriptor<String, List<Item>> mapStateDesc =\n                            new MapStateDescriptor<>(\n                                    \"items\",\n                                    BasicTypeInfo.STRING_TYPE_INFO,\n                                    new ListTypeInfo<>(Item.class));\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n                        mapStateDesc.enableTimeToLive(StateTtlConfig\n                                .newBuilder(Time.milliseconds(1))\n                                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)\n                                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)\n                                .cleanupInRocksdbCompactFilter(10)\n                                .build());\n\n                    }\n\n\n                    @Override\n                    public void processElement(Item value, Context ctx, Collector<String> out) throws Exception {\n\n                        MapState<String, List<Item>> mapState = getRuntimeContext().getMapState(mapStateDesc);\n\n                        List<Item> l = mapState.get(value.name);\n\n                        Object o = mapState.get(\"测试\");\n\n                        if (null == l) {\n                            l = new LinkedList<>();\n                        }\n\n                        l.add(value);\n\n                        mapState.put(value.name, l);\n\n\n\n                    }\n                })\n                .print();\n\n\n        flinkEnv.env().execute(\"广播状态测试任务\");\n\n    }\n\n    @Builder\n    @Data\n    private static class Rule {\n        private String name;\n        private Shape first;\n        private Shape second;\n    }\n\n    @Builder\n    @Data\n    private static class Item {\n        private String name;\n        private Shape shape;\n        private Color color;\n\n    }\n\n\n    private enum Shape {\n        CIRCLE,\n        SQUARE\n        ;\n    }\n\n    private enum Color {\n        RED,\n        BLUE,\n        BLACK,\n        ;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_04/keyed_co_process/HashMapTest.java",
    "content": "package flink.examples.datastream._04.keyed_co_process;\n\n\nimport java.util.HashMap;\nimport java.util.Map.Entry;\n\npublic class HashMapTest {\n\n    public static void main(String[] args) {\n        HashMap<String, String> hashMap = new HashMap<>();\n\n        hashMap.put(\"1\", \"2\");\n        hashMap.put(\"2\", \"2\");\n        hashMap.put(\"3\", \"2\");\n        hashMap.put(\"4\", \"2\");\n        hashMap.put(\"5\", \"2\");\n\n        for (Entry<String, String> e : hashMap.entrySet()) {\n            hashMap.remove(e.getKey());\n        }\n    }\n\n}\n"
  },
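  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_04/keyed_co_process/HashMapSafeRemoveSketch.java",
    "content": "package flink.examples.datastream._04.keyed_co_process;\n\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.Map.Entry;\n\n/**\n * A minimal companion sketch (illustrative class name) to HashMapTest: the for-each loop there\n * fails with ConcurrentModificationException because it calls HashMap#remove while iterating the\n * entrySet(); removing through the Iterator (or entrySet().removeIf(...)) is the safe way to drain\n * a map in place.\n */\npublic class HashMapSafeRemoveSketch {\n\n    public static void main(String[] args) {\n        HashMap<String, String> hashMap = new HashMap<>();\n        hashMap.put(\"1\", \"2\");\n        hashMap.put(\"2\", \"2\");\n        hashMap.put(\"3\", \"2\");\n\n        // remove entries through the iterator so the iteration stays consistent\n        Iterator<Entry<String, String>> it = hashMap.entrySet().iterator();\n        while (it.hasNext()) {\n            it.next();\n            it.remove();\n        }\n\n        // Java 8+ one-liner with the same effect: hashMap.entrySet().removeIf(e -> true);\n        System.out.println(\"remaining entries: \" + hashMap.size());\n    }\n}\n"
  },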
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_04/keyed_co_process/_04_KeyedCoProcessFunctionTest.java",
    "content": "package flink.examples.datastream._04.keyed_co_process;\n\nimport java.util.Map.Entry;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.common.state.StateTtlConfig;\nimport org.apache.flink.api.common.state.StateTtlConfig.StateVisibility;\nimport org.apache.flink.api.common.state.StateTtlConfig.UpdateType;\nimport org.apache.flink.api.common.time.Time;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.datastream.KeyedStream;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;\nimport org.apache.flink.streaming.api.functions.sink.SinkFunction;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport com.twitter.chill.protobuf.ProtobufSerializer;\n\nimport flink.examples.JacksonUtils;\nimport flink.examples.datastream._04.keyed_co_process.protobuf.Source;\nimport flink.examples.sql._05.format.formats.protobuf.Test;\n\npublic class _04_KeyedCoProcessFunctionTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        env.registerTypeWithKryoSerializer(Source.class, ProtobufSerializer.class);\n        env.registerTypeWithKryoSerializer(Test.class, ProtobufSerializer.class);\n\n\n        KeyedStream<Source, Integer> source1 = env\n                .addSource(new UserDefineSource1())\n                .uid(\"source1\")\n                .keyBy(new KeySelector<Source, Integer>() {\n                    @Override\n                    public Integer getKey(Source value) throws Exception {\n                        return value.getName().hashCode() % 1024;\n                    }\n                });\n\n        KeyedStream<Source, Integer> source2 = env\n                .addSource(new UserDefineSource2())\n                .uid(\"source2\")\n                .keyBy(new KeySelector<Source, Integer>() {\n                    @Override\n                    public Integer getKey(Source value) throws Exception {\n                        return 
value.getName().hashCode() % 1024;\n                    }\n                });\n\n        source1.connect(source2)\n                .process(new KeyedCoProcessFunction<Integer, Source, Source, Test>() {\n\n                    private transient MapState<String, Source>\n                            source1State;\n\n                    private transient MapState<String, Source>\n                            source2State;\n\n                    private StateTtlConfig getStateTtlConfig() {\n                        return StateTtlConfig\n                                .newBuilder(Time.hours(1))\n                                .setUpdateType(UpdateType.OnCreateAndWrite)\n                                .setStateVisibility(StateVisibility.NeverReturnExpired)\n                                .cleanupIncrementally(3, true)\n                                .build();\n                    }\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n                        MapStateDescriptor<String, Source>\n                                source1StateDescriptor =\n                                new MapStateDescriptor<String, Source>(\n                                        \"source1State\"\n                                        , TypeInformation.of(String.class)\n                                        , TypeInformation.of(Source.class));\n\n                        source1StateDescriptor\n                                .enableTimeToLive(getStateTtlConfig());\n\n                        this.source1State =\n                                getRuntimeContext().getMapState(source1StateDescriptor);\n\n                        MapStateDescriptor<String, Source>\n                                source2StateDescriptor =\n                                new MapStateDescriptor<String, Source>(\n                                        \"source2State\"\n                                        , TypeInformation.of(String.class)\n                                        , TypeInformation.of(Source.class));\n\n                        source2StateDescriptor\n                                .enableTimeToLive(getStateTtlConfig());\n\n                        this.source2State =\n                                getRuntimeContext().getMapState(source2StateDescriptor);\n                    }\n\n                    @Override\n                    public void processElement1(Source value, Context ctx, Collector<Test> out) throws Exception {\n                        ctx.timerService().registerProcessingTimeTimer(System.currentTimeMillis() + 10000);\n                        this.source1State.put(value.getName(), value);\n                    }\n\n                    @Override\n                    public void processElement2(Source value, Context ctx, Collector<Test> out) throws Exception {\n                        ctx.timerService().registerProcessingTimeTimer(System.currentTimeMillis() + 10000);\n                        this.source2State.put(value.getName(), value);\n                    }\n\n                    @Override\n                    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Test> out) throws Exception {\n\n                        for (Entry<String, Source> e : this.source1State.entries()) {\n                            this.source1State.remove(e.getKey());\n                            out.collect(Test\n                                    .newBuilder()\n                                   
 .setName(e.getValue().getName())\n                                    .build());\n                        }\n\n                        for (Entry<String, Source> e : this.source2State.entries()) {\n                            this.source1State.remove(e.getKey());\n                            out.collect(Test\n                                    .newBuilder()\n                                    .setName(e.getValue().getName())\n                                    .build());\n                        }\n\n//                        this.source1State.iterator()\n//                                .forEachRemaining(a -> {\n//                                    out.collect(Test\n//                                            .newBuilder()\n//                                            .setName(a.getValue().getName())\n//                                            .build()\n//                                    );\n//                                    try {\n//                                        this.source1State.remove(a.getKey());\n//                                    } catch (Exception e) {\n//                                        e.printStackTrace();\n//                                    }\n//                                });\n//\n//                        this.source2State.iterator()\n//                                .forEachRemaining(a -> {\n//                                    out.collect(Test\n//                                            .newBuilder()\n//                                            .setName(a.getValue().getName())\n//                                            .build()\n//                                    );\n//                                    try {\n//                                        this.source2State.remove(a.getKey());\n//                                    } catch (Exception e) {\n//                                        e.printStackTrace();\n//                                    }\n//                                });\n                    }\n                })\n                .uid(\"process\")\n                .disableChaining()\n                .addSink(new SinkFunction<Test>() {\n                    @Override\n                    public void invoke(Test value, Context context) throws Exception {\n                        System.out.println(JacksonUtils.bean2Json(value));\n                    }\n                })\n                .uid(\"sink\");\n\n        env.execute(\"KeyedCoProcessFunction 测试\");\n    }\n\n\n    private static class UserDefineSource1 extends RichSourceFunction<Source> {\n\n        private volatile boolean isCancel = false;\n\n        @Override\n        public void run(SourceContext<Source> ctx) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n                ctx.collect(\n                        Source.newBuilder()\n                                .setName(\"antigenral-from-source-\" + i)\n                                .build()\n                );\n                i++;\n\n                if (i == 20) {\n                    i = 0;\n                }\n                Thread.sleep(100);\n            }\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n    }\n\n    private static class UserDefineSource2 extends RichSourceFunction<Source> {\n\n        private volatile boolean isCancel = false;\n\n        @Override\n        public void run(SourceContext<Source> ctx) throws Exception {\n\n            int i = 0;\n\n            
while (!this.isCancel) {\n                ctx.collect(\n                        Source.getDefaultInstance()\n                );\n                i++;\n\n                if (i == 20) {\n                    i = 0;\n                }\n                Thread.sleep(100);\n            }\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n    }\n\n\n}\n"
  },
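  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_04/keyed_co_process/KeyedCoProcessFlushSketch.java",
    "content": "package flink.examples.datastream._04.keyed_co_process;\n\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;\nimport org.apache.flink.util.Collector;\n\n/**\n * A minimal sketch (illustrative class and state names) of the timer-driven flush that\n * _04_KeyedCoProcessFunctionTest implements: buffer each input in its own MapState and, in\n * onTimer(), drain and clear each state against itself. Note that the original test's second\n * onTimer loop iterates source2State but removes from source1State, so source2State keeps growing.\n */\npublic class KeyedCoProcessFlushSketch extends KeyedCoProcessFunction<String, String, String, String> {\n\n    private transient MapState<String, String> leftState;\n    private transient MapState<String, String> rightState;\n\n    @Override\n    public void open(Configuration parameters) throws Exception {\n        super.open(parameters);\n        this.leftState = getRuntimeContext().getMapState(\n                new MapStateDescriptor<String, String>(\"left\", String.class, String.class));\n        this.rightState = getRuntimeContext().getMapState(\n                new MapStateDescriptor<String, String>(\"right\", String.class, String.class));\n    }\n\n    @Override\n    public void processElement1(String value, Context ctx, Collector<String> out) throws Exception {\n        leftState.put(value, value);\n        ctx.timerService().registerProcessingTimeTimer(ctx.timerService().currentProcessingTime() + 10000L);\n    }\n\n    @Override\n    public void processElement2(String value, Context ctx, Collector<String> out) throws Exception {\n        rightState.put(value, value);\n        ctx.timerService().registerProcessingTimeTimer(ctx.timerService().currentProcessingTime() + 10000L);\n    }\n\n    @Override\n    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {\n        // flush both sides, then clear each state against itself\n        for (String v : leftState.values()) {\n            out.collect(\"left: \" + v);\n        }\n        leftState.clear();\n\n        for (String v : rightState.values()) {\n            out.collect(\"right: \" + v);\n        }\n        rightState.clear();\n    }\n}\n"
  },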
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_05_ken/_01_watermark/WatermarkTest.java",
    "content": "package flink.examples.datastream._05_ken._01_watermark;\n\nimport java.util.HashSet;\nimport java.util.Set;\nimport java.util.function.Consumer;\n\nimport org.apache.flink.api.common.functions.FilterFunction;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;\nimport org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.streaming.api.windowing.windows.TimeWindow;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n\npublic class WatermarkTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(8);\n\n        flinkEnv.env()\n                .addSource(new SourceFunction<SourceModel>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<SourceModel> ctx) throws Exception {\n                        while (!isCancel) {\n                            // xxx 日志上报逻辑\n                            ctx.collect(\n                                    SourceModel\n                                            .builder()\n                                            .page(\"Shopping-Cart\")\n                                            .build()\n                            );\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .filter(new FilterFunction<SourceModel>() {\n                    @Override\n                    public boolean filter(SourceModel value) throws Exception {\n                        return value.getPage().equals(\"Shopping-Cart\");\n                    }\n                })\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<SourceModel>(Time.minutes(1)) {\n                    @Override\n                    public long extractTimestamp(SourceModel element) {\n                        return element.getTime();\n                    }\n                })\n                .keyBy(new KeySelector<SourceModel, Long>() {\n                    @Override\n                    public Long getKey(SourceModel value) throws Exception {\n                        return 0L;\n                    }\n                })\n                .window(TumblingEventTimeWindows.of(Time.minutes(1)))\n                .process(new ProcessWindowFunction<SourceModel, SinkModel, Long, TimeWindow>() {\n                    @Override\n                    public void process(Long aLong, Context context, Iterable<SourceModel> elements,\n                            Collector<SinkModel> out) throws Exception {\n\n                        long windowStart = context.window().getStart();\n\n                        Set<Long> s = new HashSet<>();\n\n                        elements.forEach(new Consumer<SourceModel>() {\n                            @Override\n                            public void 
accept(SourceModel sourceModel) {\n                                s.add(sourceModel.userId);\n                            }\n                        });\n\n                        out.collect(\n                                SinkModel\n                                        .builder()\n                                        .uv(s.size())\n                                        .time(windowStart)\n                                        .build()\n                        );\n                    }\n                })\n                .print();\n\n        flinkEnv.env().execute();\n    }\n\n    @Data\n    @Builder\n    private static class SourceModel {\n        private long userId;\n        private String page;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class SinkModel {\n        private long uv;\n        private long time;\n    }\n\n}\n"
  },
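  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_05_ken/_01_watermark/WatermarkStrategySketch.java",
    "content": "package flink.examples.datastream._05_ken._01_watermark;\n\nimport java.time.Duration;\n\nimport org.apache.flink.api.common.eventtime.WatermarkStrategy;\nimport org.apache.flink.api.java.tuple.Tuple2;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\n\n/**\n * A minimal sketch (illustrative class name): the same 1-minute bounded-out-of-orderness watermark\n * that WatermarkTest builds with the deprecated BoundedOutOfOrdernessTimestampExtractor, expressed\n * with the WatermarkStrategy API that replaces it. Tuple2.f0 is a user id and f1 the event-time\n * millis; the sample records are made up for illustration.\n */\npublic class WatermarkStrategySketch {\n\n    public static void main(String[] args) throws Exception {\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.fromElements(\n                        Tuple2.of(\"user-1\", 1627254000000L),\n                        Tuple2.of(\"user-2\", 1627254005000L),\n                        Tuple2.of(\"user-1\", 1627253950000L)) // out-of-order record\n                // let the watermark lag 1 minute behind the largest timestamp seen so far\n                .assignTimestampsAndWatermarks(\n                        WatermarkStrategy.<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofMinutes(1))\n                                .withTimestampAssigner((element, recordTimestamp) -> element.f1))\n                .print();\n\n        env.execute(\"WatermarkStrategy sketch\");\n    }\n}\n"
  },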
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_06_test/_01_event_proctime/OneJobWIthProcAndEventTimeWIndowTest.java",
    "content": "package flink.examples.datastream._06_test._01_event_proctime;\n\nimport java.util.HashSet;\nimport java.util.Set;\nimport java.util.function.Consumer;\n\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;\nimport org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;\nimport org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.streaming.api.windowing.windows.TimeWindow;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n\npublic class OneJobWIthProcAndEventTimeWIndowTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new SourceFunction<SourceModel>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<SourceModel> ctx) throws Exception {\n                        while (!isCancel) {\n                            // xxx 日志上报逻辑\n                            ctx.collect(\n                                    SourceModel\n                                            .builder()\n                                            .page(\"Shopping-Cart\")\n                                            .userId(1)\n                                            .time(System.currentTimeMillis())\n                                            .build()\n                            );\n                            Thread.sleep(100);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<SourceModel>(Time.seconds(1)) {\n                    @Override\n                    public long extractTimestamp(SourceModel element) {\n                        return element.getTime();\n                    }\n                })\n                .keyBy(new KeySelector<SourceModel, Long>() {\n                    @Override\n                    public Long getKey(SourceModel value) throws Exception {\n                        return 0L;\n                    }\n                })\n                .window(TumblingEventTimeWindows.of(Time.seconds(10)))\n                .process(new ProcessWindowFunction<SourceModel, MiddleModel, Long, TimeWindow>() {\n                    @Override\n                    public void process(Long aLong, Context context, Iterable<SourceModel> elements,\n                            Collector<MiddleModel> out) throws Exception {\n\n                        long windowStart = context.window().getStart();\n\n                        Set<Long> s = new HashSet<>();\n\n                        elements.forEach(new Consumer<SourceModel>() {\n                            @Override\n                            public void accept(SourceModel sourceModel) {\n               
                 s.add(sourceModel.userId);\n                            }\n                        });\n\n                        out.collect(\n                                MiddleModel\n                                        .builder()\n                                        .uv(s.size())\n                                        .time(windowStart)\n                                        .build()\n                        );\n                    }\n                })\n                .keyBy(new KeySelector<MiddleModel, Integer>() {\n                    @Override\n                    public Integer getKey(MiddleModel value) throws Exception {\n                        return 0;\n                    }\n                })\n                .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))\n                .process(new ProcessWindowFunction<MiddleModel, SinkModel, Integer, TimeWindow>() {\n                    @Override\n                    public void process(Integer integer, Context context, Iterable<MiddleModel> elements,\n                            Collector<SinkModel> out) throws Exception {\n                        System.out.println(1);\n                    }\n                })\n                .print();\n\n        flinkEnv.env().execute();\n    }\n\n    @Data\n    @Builder\n    private static class SourceModel {\n        private long userId;\n        private String page;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class MiddleModel {\n        private long uv;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class SinkModel {\n        private long uv;\n        private long time;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_06_test/_01_event_proctime/OneJobWIthTimerTest.java",
    "content": "package flink.examples.datastream._06_test._01_event_proctime;\n\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.streaming.api.functions.KeyedProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n\npublic class OneJobWIthTimerTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new SourceFunction<SourceModel>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<SourceModel> ctx) throws Exception {\n                        while (!isCancel) {\n                            // xxx 日志上报逻辑\n                            ctx.collect(\n                                    SourceModel\n                                            .builder()\n                                            .page(\"Shopping-Cart\")\n                                            .userId(1)\n                                            .time(System.currentTimeMillis())\n                                            .build()\n                            );\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<SourceModel>(Time.seconds(1)) {\n                    @Override\n                    public long extractTimestamp(SourceModel element) {\n                        return element.getTime();\n                    }\n                })\n                .keyBy(new KeySelector<SourceModel, Long>() {\n                    @Override\n                    public Long getKey(SourceModel value) throws Exception {\n                        return 0L;\n                    }\n                })\n                .process(new KeyedProcessFunction<Long, SourceModel, SinkModel>() {\n\n                    private int i = 0;\n\n                    @Override\n                    public void processElement(SourceModel value, Context ctx, Collector<SinkModel> out)\n                            throws Exception {\n                        if (i == 0) {\n                            i++;\n                            System.out.println(1);\n                            ctx.timerService().registerEventTimeTimer(value.time + 1000);\n                            ctx.timerService().registerProcessingTimeTimer(value.time + 5000);\n                        } else {\n                            System.out.println(2);\n                        }\n                    }\n\n                    @Override\n                    public void onTimer(long timestamp, OnTimerContext ctx, Collector<SinkModel> out) throws Exception {\n\n                        System.out.println(ctx.timeDomain());\n\n                    }\n                })\n                .print();\n\n        
flinkEnv.env().execute();\n    }\n\n    @Data\n    @Builder\n    private static class SourceModel {\n        private long userId;\n        private String page;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class MiddleModel {\n        private long uv;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class SinkModel {\n        private long uv;\n        private long time;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_07_lambda_error/LambdaErrorTest.java",
    "content": "package flink.examples.datastream._07_lambda_error;\n\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n\npublic class LambdaErrorTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new SourceFunction<SourceModel>() {\n\n                    private volatile boolean isCancel = false;\n\n                    private SinkModel s;\n\n                    @Override\n                    public void run(SourceContext<SourceModel> ctx) throws Exception {\n                        while (!isCancel) {\n                            // xxx 日志上报逻辑\n                            ctx.collect(\n                                    SourceModel\n                                            .builder()\n                                            .page(\"Shopping-Cart\")\n                                            .userId(1)\n                                            .time(System.currentTimeMillis())\n                                            .build()\n                            );\n                            Thread.sleep(100);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .print();\n\n        flinkEnv.env().execute();\n    }\n\n    @Data\n    @Builder\n    private static class SourceModel {\n        private long userId;\n        private String page;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class MiddleModel {\n        private long uv;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class SinkModel {\n        private long uv;\n        private long time;\n    }\n\n}\n"
  },
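  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_07_lambda_error/LambdaReturnsSketch.java",
    "content": "package flink.examples.datastream._07_lambda_error;\n\nimport org.apache.flink.api.common.typeinfo.Types;\nimport org.apache.flink.api.java.tuple.Tuple2;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\n\n/**\n * A minimal sketch (illustrative class name) of the \"lambda error\" this package is named after:\n * when a lambda produces a generic type such as Tuple2, type erasure keeps Flink from inferring the\n * output type and the job fails at graph-build time; declaring it explicitly with\n * returns(Types.TUPLE(...)) resolves it.\n */\npublic class LambdaReturnsSketch {\n\n    public static void main(String[] args) throws Exception {\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.fromElements(\"Shopping-Cart\", \"Home\", \"Shopping-Cart\")\n                // without the returns(...) hint below, Flink cannot determine the lambda's Tuple2 output type\n                .map(page -> Tuple2.of(page, 1))\n                .returns(Types.TUPLE(Types.STRING, Types.INT))\n                .keyBy(value -> value.f0)\n                .sum(1)\n                .print();\n\n        env.execute(\"lambda returns() sketch\");\n    }\n}\n"
  },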
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_08_late_record/LatenessTest.java",
    "content": "package flink.examples.datastream._08_late_record;\n\nimport org.apache.flink.api.common.functions.FlatMapFunction;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.streaming.api.windowing.windows.TimeWindow;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n\npublic class LatenessTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new SourceFunction<SourceModel>() {\n\n                    private volatile boolean isCancel = false;\n\n                    private SinkModel s;\n\n                    @Override\n                    public void run(SourceContext<SourceModel> ctx) throws Exception {\n                        while (!isCancel) {\n                            // xxx 日志上报逻辑\n                            ctx.collect(\n                                    SourceModel\n                                            .builder()\n                                            .page(\"Shopping-Cart\")\n                                            .userId(1)\n                                            .time(System.currentTimeMillis())\n                                            .build()\n                            );\n                            Thread.sleep(100);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<SourceModel>(Time.minutes(1)) {\n                    @Override\n                    public long extractTimestamp(SourceModel element) {\n                        return element.getTime();\n                    }\n                })\n                .flatMap(new FlatMapFunction<SourceModel, MiddleModel>() {\n\n                    private Collector<MiddleModel> out1;\n\n                    @Override\n                    public void flatMap(SourceModel value, Collector<MiddleModel> out) throws Exception {\n                        for (int i = 0; i < 3; i++) {\n\n                            if (out1 == null) {\n                                this.out1 = out;\n                            }\n\n                            out.collect(\n                                    MiddleModel\n                                            .builder()\n                                            .uv(1L)\n                                            .time(System.currentTimeMillis())\n                                            .build()\n                            );\n                        }\n                    }\n                })\n                .keyBy(new KeySelector<MiddleModel, Integer>() {\n                    @Override\n                    public Integer getKey(MiddleModel value) throws Exception {\n                        return 0;\n                    }\n                })\n     
           .timeWindow(Time.seconds(10))\n                .process(new ProcessWindowFunction<MiddleModel, SinkModel, Integer, TimeWindow>() {\n                    @Override\n                    public void process(Integer integer, Context context, Iterable<MiddleModel> elements,\n                            Collector<SinkModel> out) throws Exception {\n                        System.out.println(1L);\n                    }\n                })\n                .print();\n\n        flinkEnv.env().execute();\n    }\n\n    @Data\n    @Builder\n    private static class SourceModel {\n        private long userId;\n        private String page;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class MiddleModel {\n        private long uv;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class SinkModel {\n        private long uv;\n        private long time;\n    }\n\n}\n"
  },
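  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_08_late_record/AllowedLatenessSketch.java",
    "content": "package flink.examples.datastream._08_late_record;\n\nimport java.time.Duration;\n\nimport org.apache.flink.api.common.eventtime.WatermarkStrategy;\nimport org.apache.flink.api.java.tuple.Tuple2;\nimport org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.util.OutputTag;\n\n/**\n * A minimal sketch (illustrative class name) of the two late-data knobs LatenessTest is named after\n * but does not configure: allowedLateness() keeps a window's state so late-but-not-too-late records\n * can still update it, and sideOutputLateData() routes records later than that into a side output\n * instead of dropping them silently. Tuple2.f0 is a page name and f1 the event-time millis.\n */\npublic class AllowedLatenessSketch {\n\n    public static void main(String[] args) throws Exception {\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n        env.setParallelism(1);\n\n        // tag for records that arrive after watermark + allowed lateness\n        final OutputTag<Tuple2<String, Long>> lateTag = new OutputTag<Tuple2<String, Long>>(\"late-records\") {};\n\n        SingleOutputStreamOperator<Tuple2<String, Long>> perWindowMax = env\n                .fromElements(\n                        Tuple2.of(\"Shopping-Cart\", 1627254000000L),\n                        Tuple2.of(\"Shopping-Cart\", 1627254005000L),\n                        Tuple2.of(\"Shopping-Cart\", 1627253000000L)) // out-of-order record\n                .assignTimestampsAndWatermarks(\n                        WatermarkStrategy.<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(1))\n                                .withTimestampAssigner((e, ts) -> e.f1))\n                .keyBy(e -> e.f0)\n                .window(TumblingEventTimeWindows.of(Time.seconds(10)))\n                // records within 1 minute behind the watermark still re-fire their window\n                .allowedLateness(Time.minutes(1))\n                // anything later than that is emitted to the side output\n                .sideOutputLateData(lateTag)\n                .max(1);\n\n        perWindowMax.print();\n        perWindowMax.getSideOutput(lateTag).print();\n\n        env.execute(\"allowed lateness sketch\");\n    }\n}\n"
  },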
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_09_join/_01_window_join/_01_Window_Join_Test.java",
    "content": "//package flink.examples.datastream._09_join._01_window_join;\n//\n//import org.apache.flink.api.common.functions.FlatJoinFunction;\n//import org.apache.flink.api.common.functions.JoinFunction;\n//import org.apache.flink.api.java.functions.KeySelector;\n//import org.apache.flink.streaming.api.functions.source.SourceFunction;\n//import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;\n//import org.apache.flink.streaming.api.windowing.time.Time;\n//import org.apache.flink.util.Collector;\n//\n//import flink.examples.FlinkEnvUtils;\n//import flink.examples.FlinkEnvUtils.FlinkEnv;\n//\n//\n//public class _01_Window_Join_Test {\n//\n//    public static void main(String[] args) throws Exception {\n//\n//        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n//\n//        flinkEnv.env().setParallelism(1);\n//\n//        flinkEnv.env()\n//                .addSource(new SourceFunction<Object>() {\n//                    @Override\n//                    public void run(SourceContext<Object> ctx) throws Exception {\n//\n//                    }\n//\n//                    @Override\n//                    public void cancel() {\n//\n//                    }\n//                })\n//                .join(flinkEnv.env().addSource(new SourceFunction<Object>() {\n//                    @Override\n//                    public void run(SourceContext<Object> ctx) throws Exception {\n//\n//                    }\n//\n//                    @Override\n//                    public void cancel() {\n//\n//                    }\n//                }))\n//                .where(new KeySelector<Object, Object>() {\n//                    @Override\n//                    public Object getKey(Object value) throws Exception {\n//                        return null;\n//                    }\n//                })\n//                .equalTo(new KeySelector<Object, Object>() {\n//                    @Override\n//                    public Object getKey(Object value) throws Exception {\n//                        return null;\n//                    }\n//                })\n//                .window(TumblingEventTimeWindows.of(Time.seconds(60)))\n//                .apply(new FlatJoinFunction<Object, Object, Object>() {\n//                    @Override\n//                    public void join(Object first, Object second, Collector<Object> out) throws Exception {\n//\n//                    }\n//                })\n//                .apply(new JoinFunction<Object, Object, Object>() {\n//                    @Override\n//                    public Object join(Object first, Object second) throws Exception {\n//                        return null;\n//                    }\n//                });\n//    }\n//\n//}\n"
  },
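  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_09_join/_01_window_join/WindowJoinSketch.java",
    "content": "package flink.examples.datastream._09_join._01_window_join;\n\nimport java.time.Duration;\n\nimport org.apache.flink.api.common.eventtime.WatermarkStrategy;\nimport org.apache.flink.api.common.functions.JoinFunction;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.api.java.tuple.Tuple2;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;\nimport org.apache.flink.streaming.api.windowing.time.Time;\n\n/**\n * A minimal runnable sketch (illustrative class name) of the window join that _01_Window_Join_Test\n * leaves commented out: join(...).where(...).equalTo(...).window(...) takes exactly one apply(),\n * either a JoinFunction or a FlatJoinFunction, not both chained. Tuple2.f0 is the join key and f1\n * the event-time millis; the sample data is made up for illustration.\n */\npublic class WindowJoinSketch {\n\n    public static void main(String[] args) throws Exception {\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n        env.setParallelism(1);\n\n        WatermarkStrategy<Tuple2<String, Long>> strategy =\n                WatermarkStrategy.<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(1))\n                        .withTimestampAssigner((e, ts) -> e.f1);\n\n        DataStream<Tuple2<String, Long>> clicks = env\n                .fromElements(Tuple2.of(\"user-1\", 1000L), Tuple2.of(\"user-2\", 2000L))\n                .assignTimestampsAndWatermarks(strategy);\n\n        DataStream<Tuple2<String, Long>> orders = env\n                .fromElements(Tuple2.of(\"user-1\", 1500L))\n                .assignTimestampsAndWatermarks(strategy);\n\n        clicks.join(orders)\n                .where(new KeySelector<Tuple2<String, Long>, String>() {\n                    @Override\n                    public String getKey(Tuple2<String, Long> value) {\n                        return value.f0;\n                    }\n                })\n                .equalTo(new KeySelector<Tuple2<String, Long>, String>() {\n                    @Override\n                    public String getKey(Tuple2<String, Long> value) {\n                        return value.f0;\n                    }\n                })\n                .window(TumblingEventTimeWindows.of(Time.seconds(60)))\n                // a single apply(): pair up elements that share a key inside the same window\n                .apply(new JoinFunction<Tuple2<String, Long>, Tuple2<String, Long>, String>() {\n                    @Override\n                    public String join(Tuple2<String, Long> first, Tuple2<String, Long> second) {\n                        return first.f0 + \" joined @ \" + first.f1 + \"/\" + second.f1;\n                    }\n                })\n                .print();\n\n        env.execute(\"window join sketch\");\n    }\n}\n"
  },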
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_09_join/_02_connect/_01_Connect_Test.java",
    "content": "package flink.examples.datastream._09_join._02_connect;\n\nimport org.apache.flink.api.common.state.MapState;\nimport org.apache.flink.api.common.state.MapStateDescriptor;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class _01_Connect_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new SourceFunction<Object>() {\n                    @Override\n                    public void run(SourceContext<Object> ctx) throws Exception {\n\n                    }\n\n                    @Override\n                    public void cancel() {\n\n                    }\n                })\n                .keyBy(new KeySelector<Object, Object>() {\n                    @Override\n                    public Object getKey(Object value) throws Exception {\n                        return null;\n                    }\n                })\n                .connect(flinkEnv.env().addSource(new SourceFunction<Object>() {\n                    @Override\n                    public void run(SourceContext<Object> ctx) throws Exception {\n\n                    }\n\n                    @Override\n                    public void cancel() {\n\n                    }\n                }).keyBy(new KeySelector<Object, Object>() {\n                    @Override\n                    public Object getKey(Object value) throws Exception {\n                        return null;\n                    }\n                }))\n                .process(new KeyedCoProcessFunction<Object, Object, Object, Object>() {\n\n                    private transient MapState<String, String> mapState;\n\n                    @Override\n                    public void open(Configuration parameters) throws Exception {\n                        super.open(parameters);\n\n                        this.mapState = getRuntimeContext().getMapState(new MapStateDescriptor<String, String>(\"a\", String.class, String.class));\n                    }\n\n                    @Override\n                    public void processElement1(Object value, Context ctx, Collector<Object> out) throws Exception {\n\n                    }\n\n                    @Override\n                    public void processElement2(Object value, Context ctx, Collector<Object> out) throws Exception {\n\n                    }\n                })\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/datastream/_10_agg/AggTest.java",
    "content": "package flink.examples.datastream._10_agg;\n\nimport org.apache.flink.api.common.functions.AggregateFunction;\nimport org.apache.flink.api.java.functions.KeySelector;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.Builder;\nimport lombok.Data;\n\n\npublic class AggTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.env()\n                .addSource(new SourceFunction<SourceModel>() {\n\n                    private volatile boolean isCancel = false;\n\n                    @Override\n                    public void run(SourceContext<SourceModel> ctx) throws Exception {\n                        while (!isCancel) {\n                            // xxx 日志上报逻辑\n                            ctx.collect(\n                                    SourceModel\n                                            .builder()\n                                            .page(\"Shopping-Cart\")\n                                            .userId(1)\n                                            .time(System.currentTimeMillis())\n                                            .build()\n                            );\n                            Thread.sleep(1000);\n                        }\n                    }\n\n                    @Override\n                    public void cancel() {\n                        this.isCancel = true;\n                    }\n                })\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<SourceModel>(Time.seconds(1)) {\n                    @Override\n                    public long extractTimestamp(SourceModel element) {\n                        return element.getTime();\n                    }\n                })\n                .keyBy(new KeySelector<SourceModel, Long>() {\n                    @Override\n                    public Long getKey(SourceModel value) throws Exception {\n                        return 0L;\n                    }\n                })\n                .timeWindow(Time.seconds(3))\n                .aggregate(new AggregateFunction<SourceModel, SourceModel, SourceModel>() {\n                    @Override\n                    public SourceModel createAccumulator() {\n                        return SourceModel.builder().build();\n                    }\n\n                    @Override\n                    public SourceModel add(SourceModel sourceModel, SourceModel sourceModel2) {\n                        return sourceModel;\n                    }\n\n                    @Override\n                    public SourceModel getResult(SourceModel sourceModel) {\n                        return sourceModel;\n                    }\n\n                    @Override\n                    public SourceModel merge(SourceModel sourceModel, SourceModel acc1) {\n                        return null;\n                    }\n                })\n                .print();\n\n        flinkEnv.env().execute();\n    }\n\n    @Data\n    @Builder\n    private static class SourceModel {\n        private long userId;\n        private String page;\n        private long time;\n    }\n\n    @Data\n    
@Builder\n    private static class MiddleModel {\n        private long uv;\n        private long time;\n    }\n\n    @Data\n    @Builder\n    private static class SinkModel {\n        private long uv;\n        private long time;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/practice/_01/dau/_01_DataStream_Session_Window.java",
    "content": "package flink.examples.practice._01.dau;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper_UDF;\n\npublic class _01_DataStream_Session_Window {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"table.local-time-zone\", \"GMT+08:00\");\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                flinkEnv.env().fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627254000000L), // 北京时间：2021-07-26 07:00:00\n                        Tuple3.of(\"2\", 1L, 1627218000000L + 5000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 86400000 + 7000L)))\n                        .assignTimestampsAndWatermarks(\n                                new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Long, Long>>(Time.seconds(0L)) {\n                                    @Override\n                                    public long extractTimestamp(Tuple3<String, Long, Long> element) {\n                                        return element.f2;\n                                    }\n                                });\n\n        flinkEnv.streamTEnv().registerFunction(\"mod\", new Mod_UDF());\n\n        flinkEnv.streamTEnv().registerFunction(\"status_mapper\", new StatusMapper_UDF());\n\n        flinkEnv.streamTEnv().createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp, rowtime.rowtime\");\n\n        String sql = \"SELECT\\n\"\n                + \"  count(1),\\n\"\n                + \"  cast(tumble_start(rowtime, INTERVAL '1' DAY) as string)\\n\"\n                + \"FROM\\n\"\n                + \"  source_db.source_table\\n\"\n                + \"GROUP BY\\n\"\n                + \"  tumble(rowtime, INTERVAL '1' DAY)\";\n\n        Table result = flinkEnv.streamTEnv().sqlQuery(sql);\n\n        flinkEnv.streamTEnv().toAppendStream(result, Row.class).print();\n\n        flinkEnv.env().execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/question/datastream/_01/kryo_protobuf_no_more_bytes_left/KryoProtobufNoMoreBytesLeftTest.java",
    "content": "package flink.examples.question.datastream._01.kryo_protobuf_no_more_bytes_left;\n\nimport java.lang.reflect.Method;\n\nimport com.esotericsoftware.kryo.Kryo;\nimport com.esotericsoftware.kryo.io.Input;\nimport com.esotericsoftware.kryo.io.Output;\nimport com.google.protobuf.Message;\nimport com.sun.tools.javac.util.Assert;\nimport com.twitter.chill.protobuf.ProtobufSerializer;\n\nimport flink.examples.datastream._04.keyed_co_process.protobuf.Source;\n\npublic class KryoProtobufNoMoreBytesLeftTest {\n\n    public static void main(String[] args) throws Exception {\n\n        Source source = Source\n                .newBuilder()\n                .build();\n\n        byte[] bytes = source.toByteArray();\n\n        byte[] buffer = new byte[300];\n\n        Kryo kryo = newKryo();\n\n        Output output = new Output(buffer);\n\n        // ser\n\n        ProtobufSerializer protobufSerializer = new ProtobufSerializer();\n\n        protobufSerializer.write(kryo, output, source);\n\n\n        // deser\n\n        Input input = new Input(buffer);\n\n        Class<?> c = (Class<?>) Source.getDefaultInstance().getClass();\n\n        Message m = protobufSerializer.read(kryo, input, (Class<Message>) c);\n\n        testGetParse();\n\n    }\n\n    private static void testGetParse() throws Exception {\n\n        ProtobufSerializerV2 protobufSerializerV2 = new ProtobufSerializerV2();\n\n        Method m = protobufSerializerV2.getParse(Source.class);\n\n\n        Source s = (Source) m.invoke(null, Source.newBuilder().setName(\"antigeneral\").build().toByteArray());\n\n        Assert.check(\"antigeneral\".equals(s.getName()));\n    }\n\n    private static class ProtobufSerializerV2 extends ProtobufSerializer {\n        @Override\n        public Method getParse(Class cls) throws Exception {\n            return super.getParse(cls);\n        }\n    }\n\n    private static Kryo newKryo() {\n        Kryo kryo = new Kryo();\n\n        kryo.addDefaultSerializer(Source.class, ProtobufSerializerV2.class);\n\n        return kryo;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/question/sql/_01/lots_source_fields_poor_performance/EmbeddedKafka.java",
    "content": "//package flink.examples.question.sql._01.lots_source_fields_poor_performance;\n//\n//import static net.mguenther.kafka.junit.ObserveKeyValues.on;\n//import static net.mguenther.kafka.junit.SendValues.to;\n//\n//import lombok.SneakyThrows;\n//import net.mguenther.kafka.junit.EmbeddedKafkaCluster;\n//import net.mguenther.kafka.junit.EmbeddedKafkaClusterConfig;\n//\n//public class EmbeddedKafka {\n//\n//    public static void main(String[] args) {\n//        EmbeddedKafkaCluster kafkaCluster =\n//                EmbeddedKafkaCluster.provisionWith(EmbeddedKafkaClusterConfig.defaultClusterConfig());\n//        kafkaCluster.start();\n//\n//        new Thread(new Runnable() {\n//            @SneakyThrows\n//            @Override\n//            public void run() {\n//                while (true) {\n//                    kafkaCluster.send(to(\"test-topic\", \"a\", \"b\", \"c\"));\n//                    Thread.sleep(1000);\n//                }\n//            }\n//        }).start();\n//\n//\n//        new Thread(new Runnable() {\n//            @SneakyThrows\n//            @Override\n//            public void run() {\n//                while (true) {\n//                    kafkaCluster.observe(on(\"test-topic\", 3))\n//                            .forEach(a -> System.out.println(a.getValue()));\n//                }\n//            }\n//        }).start();\n//\n//\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/question/sql/_01/lots_source_fields_poor_performance/_01_DataGenSourceTest.java",
    "content": "package flink.examples.question.sql._01.lots_source_fields_poor_performance;\n\nimport java.util.Arrays;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class _01_DataGenSourceTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 TUMBLE WINDOW 案例\");\n\n        tEnv.getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n\n        String originalSql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          
row_number() over(partition by user_id order by server_timestamp) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    user_id1 BIGINT,\\n\"\n                + \"    name1 STRING,\\n\"\n                + \"    user_id2 BIGINT,\\n\"\n                + \"    name2 STRING,\\n\"\n                + \"    user_id3 BIGINT,\\n\"\n                + \"    name3 STRING,\\n\"\n                + \"    user_id4 BIGINT,\\n\"\n                + \"    name4 STRING,\\n\"\n                + \"    user_id5 BIGINT,\\n\"\n                + \"    name5 STRING,\\n\"\n                + \"    user_id6 BIGINT,\\n\"\n                + \"    name6 STRING,\\n\"\n                + \"    user_id7 BIGINT,\\n\"\n                + \"    name7 STRING,\\n\"\n                + \"    user_id8 BIGINT,\\n\"\n                + \"    name8 STRING,\\n\"\n                + \"    user_id9 BIGINT,\\n\"\n                + \"    name9 STRING,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          row_number() over(partition by user_id order by server_timestamp) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(tEnv::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/question/sql/_01/lots_source_fields_poor_performance/_01_JsonSourceTest.java",
    "content": "package flink.examples.question.sql._01.lots_source_fields_poor_performance;\n\nimport java.util.Arrays;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.table.data.RowData;\n\nimport com.google.common.collect.ImmutableMap;\n\nimport flink.examples.JacksonUtils;\n\n\npublic class _01_JsonSourceTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 TUMBLE WINDOW 案例\");\n\n        tEnv.getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n\n        String originalSql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'class.name' = 'flink.examples.question.sql._01.lots_source_fields_poor_performance._01_JsonSourceTest$UserDefineSource1',\\n\"\n                + \"  'format' = 'json'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n      
          + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          row_number() over(partition by user_id order by server_timestamp) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    user_id1 BIGINT,\\n\"\n                + \"    name1 STRING,\\n\"\n                + \"    user_id2 BIGINT,\\n\"\n                + \"    name2 STRING,\\n\"\n                + \"    user_id3 BIGINT,\\n\"\n                + \"    name3 STRING,\\n\"\n                + \"    user_id4 BIGINT,\\n\"\n                + \"    name4 STRING,\\n\"\n                + \"    user_id5 BIGINT,\\n\"\n                + \"    name5 STRING,\\n\"\n                + \"    user_id6 BIGINT,\\n\"\n                + \"    name6 STRING,\\n\"\n                + \"    user_id7 BIGINT,\\n\"\n                + \"    name7 STRING,\\n\"\n                + \"    user_id8 BIGINT,\\n\"\n                + \"    name8 STRING,\\n\"\n                + \"    user_id9 BIGINT,\\n\"\n                + \"    name9 STRING,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          row_number() over(partition by user_id order by server_timestamp) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n        Arrays.stream(originalSql.split(\";\"))\n                .forEach(tEnv::executeSql);\n    }\n\n    public static class UserDefineSource1 extends RichSourceFunction<RowData> {\n\n        private DeserializationSchema<RowData> dser;\n\n        private volatile boolean isCancel;\n\n        public UserDefineSource1(DeserializationSchema<RowData> dser) {\n            this.dser = dser;\n        }\n\n        @Override\n        public void run(SourceContext<RowData> ctx) throws Exception {\n            while (!this.isCancel) {\n                ctx.collect(this.dser.deserialize(\n                        JacksonUtils.bean2Json(ImmutableMap.of(\"user_id\", 1111L\n                                , \"name\", \"antigeneral\"\n                                , \"server_timestamp\", System.currentTimeMillis())\n                        
).getBytes()\n                ));\n                Thread.sleep(1000);\n            }\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/runtime/_01/future/CompletableFutureTest.java",
    "content": "package flink.examples.runtime._01.future;\n\nimport java.util.concurrent.CompletableFuture;\n\n\npublic class CompletableFutureTest {\n\n    public static void main(String[] args) throws Exception {\n        // 创建异步执行任务:\n        CompletableFuture<Double> cf = CompletableFuture.supplyAsync(CompletableFutureTest::fetchPrice);\n        // 如果执行成功:\n        cf.thenAccept((result) -> {\n            System.out.println(\"price: \" + result);\n        });\n        // 如果执行异常:\n        cf.exceptionally((e) -> {\n            e.printStackTrace();\n            return null;\n        });\n        // 主线程不要立刻结束，否则CompletableFuture默认使用的线程池会立刻关闭:\n        Thread.sleep(200);\n    }\n\n    static Double fetchPrice() {\n        try {\n            Thread.sleep(100);\n        } catch (InterruptedException e) {\n        }\n        if (false) {\n            throw new RuntimeException(\"fetch price failed!\");\n        }\n        return 5 + Math.random() * 20;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/runtime/_01/future/CompletableFutureTest4.java",
    "content": "package flink.examples.runtime._01.future;\n\nimport java.util.concurrent.CompletableFuture;\n\n\npublic class CompletableFutureTest4 {\n\n    public static void main(String[] args) throws Exception {\n        // 第一个任务:\n        CompletableFuture<String> cfQuery = CompletableFuture.supplyAsync(() -> {\n            return queryCode(\"中国石油\");\n        });\n        // cfQuery成功后继续执行下一个任务:\n        CompletableFuture<String> cfFetch = cfQuery.thenApplyAsync((code) -> {\n            return fetchPrice(code);\n        });\n        // cfFetch成功后打印结果:\n        cfFetch.thenAccept((result) -> {\n            System.out.println(\"price: \" + result);\n        });\n        // 主线程不要立刻结束，否则CompletableFuture默认使用的线程池会立刻关闭:\n        Thread.sleep(2000);\n    }\n\n    static String queryCode(String name) {\n        try {\n            Thread.sleep(100);\n        } catch (InterruptedException e) {\n        }\n        return name;\n    }\n\n    static String fetchPrice(String code) {\n        try {\n            Thread.sleep(100);\n        } catch (InterruptedException e) {\n        }\n        return code + \"：\" + 5 + Math.random() * 20;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/runtime/_01/future/CompletableFuture_AnyOf_Test3.java",
    "content": "package flink.examples.runtime._01.future;\n\nimport java.util.concurrent.CompletableFuture;\n\n\npublic class CompletableFuture_AnyOf_Test3 {\n\n    public static void main(String[] args) throws Exception {\n        // 两个CompletableFuture执行异步查询:\n        CompletableFuture<String> cfQueryFromSina = CompletableFuture.supplyAsync(() -> {\n            return queryCode(\"中国石油\", \"https://finance.sina.com.cn/code/\");\n        });\n        CompletableFuture<String> cfQueryFrom163 = CompletableFuture.supplyAsync(() -> {\n            return queryCode(\"中国石油\", \"https://money.163.com/code/\");\n        });\n\n        // 用anyOf合并为一个新的CompletableFuture:\n        CompletableFuture<Object> cfQuery = CompletableFuture.anyOf(cfQueryFromSina, cfQueryFrom163);\n\n        // 两个CompletableFuture执行异步查询:\n        CompletableFuture<Double> cfFetchFromSina = cfQuery.thenApplyAsync((code) -> {\n            return fetchPrice((String) code, \"https://finance.sina.com.cn/price/\");\n        });\n        CompletableFuture<Double> cfFetchFrom163 = cfQuery.thenApplyAsync((code) -> {\n            return fetchPrice((String) code, \"https://money.163.com/price/\");\n        });\n\n        // 用anyOf合并为一个新的CompletableFuture:\n        CompletableFuture<Object> cfFetch = CompletableFuture.anyOf(cfFetchFromSina, cfFetchFrom163);\n\n        // 最终结果:\n        cfFetch.thenAccept((result) -> {\n            System.out.println(\"price: \" + result);\n        });\n        // 主线程不要立刻结束，否则CompletableFuture默认使用的线程池会立刻关闭:\n        Thread.sleep(200);\n    }\n\n    static String queryCode(String name, String url) {\n        System.out.println(\"query code from \" + url + \"...\");\n        try {\n            Thread.sleep((long) (Math.random() * 100));\n        } catch (InterruptedException e) {\n        }\n        return \"601857\";\n    }\n\n    static Double fetchPrice(String code, String url) {\n        System.out.println(\"query price from \" + url + \"...\");\n        try {\n            Thread.sleep((long) (Math.random() * 100));\n        } catch (InterruptedException e) {\n        }\n        return 5 + Math.random() * 20;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/runtime/_01/future/CompletableFuture_ThenApplyAsync_Test2.java",
    "content": "package flink.examples.runtime._01.future;\n\nimport java.util.concurrent.CompletableFuture;\n\n\npublic class CompletableFuture_ThenApplyAsync_Test2 {\n\n    public static void main(String[] args) throws Exception {\n        // 第一个任务:\n        CompletableFuture<String> cfQuery = CompletableFuture.supplyAsync(() -> {\n            return queryCode(\"中国石油\");\n        });\n        // cfQuery成功后继续执行下一个任务:\n        CompletableFuture<String> cfFetch = cfQuery.thenApplyAsync((code) -> {\n            return fetchPrice(code);\n        });\n        // cfFetch成功后打印结果:\n        cfFetch.thenAccept((result) -> {\n            System.out.println(\"price: \" + result);\n        });\n        // 主线程不要立刻结束，否则CompletableFuture默认使用的线程池会立刻关闭:\n        Thread.sleep(2000);\n    }\n\n    static String queryCode(String name) {\n        try {\n            Thread.sleep(100);\n        } catch (InterruptedException e) {\n        }\n        return name;\n    }\n\n    static String fetchPrice(String code) {\n        try {\n            Thread.sleep(100);\n        } catch (InterruptedException e) {\n        }\n        return code + \"：\" + 5 + Math.random() * 20;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/runtime/_01/future/CompletableFuture_ThenComposeAsync_Test2.java",
    "content": "package flink.examples.runtime._01.future;\n\nimport java.util.concurrent.CompletableFuture;\n\n\npublic class CompletableFuture_ThenComposeAsync_Test2 {\n\n    public static void main(String[] args) throws Exception {\n        // 第一个任务:\n        CompletableFuture<String> cfQuery = CompletableFuture.supplyAsync(() -> {\n            return queryCode(\"中国石油\");\n        });\n        // cfQuery成功后继续执行下一个任务:\n        CompletableFuture<String> cfFetch = cfQuery.thenComposeAsync((code) -> {\n            return CompletableFuture.supplyAsync(() -> fetchPrice(code));\n        });\n        // cfFetch成功后打印结果:\n        cfFetch.thenAccept((result) -> {\n            System.out.println(\"price: \" + result);\n        });\n        // 主线程不要立刻结束，否则CompletableFuture默认使用的线程池会立刻关闭:\n        Thread.sleep(2000);\n    }\n\n    static String queryCode(String name) {\n        try {\n            Thread.sleep(100);\n        } catch (InterruptedException e) {\n        }\n        return name;\n    }\n\n    static String fetchPrice(String code) {\n        try {\n            Thread.sleep(100);\n        } catch (InterruptedException e) {\n        }\n        return code + \"：\" + 5 + Math.random() * 20;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/runtime/_01/future/FutureTest.java",
    "content": "package flink.examples.runtime._01.future;\n\nimport java.util.concurrent.Callable;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\nimport java.util.concurrent.TimeoutException;\n\n\npublic class FutureTest {\n\n    public static void main(String[] args) throws ExecutionException, InterruptedException, TimeoutException {\n\n        ExecutorService executor = Executors.newFixedThreadPool(4);\n        // 定义任务:\n        Callable<String> task = new Task();\n        // 提交任务并获得Future:\n        Future<String> future = executor.submit(task);\n\n        // 从Future获取异步执行返回的结果:\n\n        String result = future.get();\n        System.out.println(result);\n        executor.shutdown();\n\n    }\n\n    private static class Task implements Callable<String> {\n        public String call() throws Exception {\n            Thread.sleep(1000);\n            return \"1\";\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/runtime/_04/statebackend/CancelAndRestoreWithCheckpointTest.java",
    "content": "package flink.examples.runtime._04.statebackend;\n\nimport java.util.Arrays;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.contrib.streaming.state.PredefinedOptions;\nimport org.apache.flink.contrib.streaming.state.RocksDBStateBackend;\nimport org.apache.flink.runtime.state.StateBackend;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class CancelAndRestoreWithCheckpointTest {\n\n    private static final boolean ENABLE_INCREMENTAL_CHECKPOINT = true;\n    private static final int NUMBER_OF_TRANSFER_THREADS = 3;\n\n    public static void main(String[] args) throws Exception {\n\n        Configuration configuration = new Configuration();\n\n        configuration.setString(\"execution.savepoint.path\", \"file:///Users/flink/checkpoints/ce2e1969c5088bf27daf35d4907659fd/chk-5\");\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(configuration);\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        // ck 设置\n        env.getCheckpointConfig().setCheckpointTimeout(TimeUnit.MINUTES.toMillis(3));\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n\n        env.configure(configuration, Thread.currentThread().getContextClassLoader());\n\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        // 状态后端设置\n        // 设置存储文件位置为 file:///Users/flink/checkpoints\n        RocksDBStateBackend rocksDBStateBackend = new RocksDBStateBackend(\n                \"file:///Users/flink/checkpoints\", ENABLE_INCREMENTAL_CHECKPOINT);\n        rocksDBStateBackend.setNumberOfTransferThreads(NUMBER_OF_TRANSFER_THREADS);\n        rocksDBStateBackend.setPredefinedOptions(PredefinedOptions.SPINNING_DISK_OPTIMIZED_HIGH_MEM);\n        env.setStateBackend((StateBackend) rocksDBStateBackend);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode()\n                .build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"table.exec.emit.early-fire.enabled\", \"true\");\n        tEnv.getConfig().getConfiguration().setString(\"table.exec.emit.early-fire.delay\", \"60 s\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim BIGINT,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price 
BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.dim.min' = '1',\\n\"\n                + \"  'fields.dim.max' = '2',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    dim BIGINT,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"       sum(bucket_pv) as pv,\\n\"\n                + \"       sum(bucket_sum_price) as sum_price,\\n\"\n                + \"       max(bucket_max_price) as max_price,\\n\"\n                + \"       min(bucket_min_price) as min_price,\\n\"\n                + \"       sum(bucket_uv) as uv,\\n\"\n                + \"       max(window_start) as window_start\\n\"\n                + \"from (\\n\"\n                + \"     select dim,\\n\"\n                + \"            count(*) as bucket_pv,\\n\"\n                + \"            sum(price) as bucket_sum_price,\\n\"\n                + \"            max(price) as bucket_max_price,\\n\"\n                + \"            min(price) as bucket_min_price,\\n\"\n                + \"            count(distinct user_id) as bucket_uv,\\n\"\n                + \"            UNIX_TIMESTAMP(CAST(tumble_start(row_time, interval '1' DAY) AS STRING)) * 1000 as window_start\\n\"\n                + \"     from source_table\\n\"\n                + \"     group by\\n\"\n                + \"            mod(user_id, 1024),\\n\"\n                + \"            dim,\\n\"\n                + \"            tumble(row_time, interval '1' DAY)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"         window_start\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 WINDOW TVF TUMBLE WINDOW EARLY FIRE 案例\");\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(tEnv::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_01/countdistincterror/CountDistinctErrorTest.java",
    "content": "package flink.examples.sql._01.countdistincterror;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper_UDF;\n\n\npublic class CountDistinctErrorTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                env.fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627218000000L + 5000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L)));\n\n        tEnv.registerFunction(\"mod\", new Mod_UDF());\n\n        tEnv.registerFunction(\"status_mapper\", new StatusMapper_UDF());\n\n        tEnv.createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp\");\n\n        String sql = \"WITH detail_tmp AS (\\n\"\n                + \"  SELECT\\n\"\n                + \"    status,\\n\"\n                + \"    id,\\n\"\n                + \"    `timestamp`\\n\"\n                + \"  FROM\\n\"\n                + \"    (\\n\"\n                + \"      SELECT\\n\"\n                + \"        status,\\n\"\n                + \"        id,\\n\"\n                + \"        `timestamp`,\\n\"\n                + \"        row_number() over(\\n\"\n                + \"          PARTITION by id\\n\"\n                + \"          ORDER BY\\n\"\n                + \"            `timestamp` DESC\\n\"\n                + \"        ) AS rn\\n\"\n                + \"      FROM\\n\"\n                + \"        (\\n\"\n                + \"          SELECT\\n\"\n                + \"            status,\\n\"\n                + \"            id,\\n\"\n                + \"            `timestamp`\\n\"\n                + \"          FROM\\n\"\n                + \"            source_db.source_table\\n\"\n                + \"        ) t1\\n\"\n                + \"    ) t2\\n\"\n                + \"  WHERE\\n\"\n                + \"    rn = 1\\n\"\n                + \")\\n\"\n                + \"SELECT\\n\"\n                + \"  DIM.status_new as status,\\n\"\n                + \"  part_uv as uv\\n\"\n                + \"FROM\\n\"\n                + \"  (\\n\"\n                + \"    SELECT\\n\"\n                + \"      status,\\n\"\n                + \"      count(id) as part_uv\\n\"\n                + \"    FROM\\n\"\n                + \"      detail_tmp\\n\"\n                + \"    GROUP BY\\n\"\n                + \"      status,\\n\"\n                + \"      mod(id, 100)\\n\"\n                + 
\"  )\\n\"\n                + \"LEFT JOIN LATERAL TABLE(status_mapper(status)) AS DIM(status_new) ON TRUE\\n\";\n\n        Table result = tEnv.sqlQuery(sql);\n\n        tEnv.toRetractStream(result, Row.class).print();\n\n        env.execute();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_01/countdistincterror/CountDistinctErrorTest2.java",
    "content": "package flink.examples.sql._01.countdistincterror;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper_UDF;\n\n\npublic class CountDistinctErrorTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                env.fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627218000000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L)));\n\n        tEnv.registerFunction(\"mod\", new Mod_UDF());\n\n        tEnv.registerFunction(\"status_mapper\", new StatusMapper_UDF());\n\n        tEnv.createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp\");\n\n        String sql = \"WITH detail_tmp AS (\\n\"\n                + \"  SELECT\\n\"\n                + \"    status,\\n\"\n                + \"    id,\\n\"\n                + \"    `timestamp`\\n\"\n                + \"  FROM\\n\"\n                + \"    (\\n\"\n                + \"      SELECT\\n\"\n                + \"        status,\\n\"\n                + \"        id,\\n\"\n                + \"        `timestamp`,\\n\"\n                + \"        row_number() over(\\n\"\n                + \"          PARTITION by id\\n\"\n                + \"          ORDER BY\\n\"\n                + \"            `timestamp` DESC\\n\"\n                + \"        ) AS rn\\n\"\n                + \"      FROM\\n\"\n                + \"        (\\n\"\n                + \"          SELECT\\n\"\n                + \"            status,\\n\"\n                + \"            id,\\n\"\n                + \"            `timestamp`\\n\"\n                + \"          FROM\\n\"\n                + \"            source_db.source_table\\n\"\n                + \"        ) t1\\n\"\n                + \"    ) t2\\n\"\n                + \"  WHERE\\n\"\n                + \"    rn = 1\\n\"\n                + \")\\n\"\n                + \"SELECT\\n\"\n                + \"  DIM.status_new as status,\\n\"\n                + \"  sum(part_uv) as uv\\n\"\n                + \"FROM\\n\"\n                + \"  (\\n\"\n                + \"    SELECT\\n\"\n                + \"      status,\\n\"\n                + \"      count(distinct id) as part_uv\\n\"\n                + \"    FROM\\n\"\n                + \"      detail_tmp\\n\"\n                + \"    GROUP BY\\n\"\n                + \"      status,\\n\"\n                + \"      mod(id, 100)\\n\"\n             
   + \"  )\\n\"\n                + \"LEFT JOIN LATERAL TABLE(status_mapper(status)) AS DIM(status_new) ON TRUE\\n\"\n                + \"GROUP BY\\n\"\n                + \"  DIM.status_new\";\n\n        Table result = tEnv.sqlQuery(sql);\n\n        tEnv.toRetractStream(result, Row.class).print();\n\n        String s = env.getExecutionPlan();\n\n        env.execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_01/countdistincterror/CountDistinctErrorTest3.java",
    "content": "package flink.examples.sql._01.countdistincterror;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper1_UDF;\n\n\npublic class CountDistinctErrorTest3 {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                env.fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627218000000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L)));\n\n        tEnv.registerFunction(\"mod\", new Mod_UDF());\n\n        tEnv.registerFunction(\"status_mapper\", new StatusMapper1_UDF());\n\n        tEnv.createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp\");\n\n        String sql = \"WITH detail_tmp AS (\\n\"\n                + \"  SELECT\\n\"\n                + \"    status,\\n\"\n                + \"    id,\\n\"\n                + \"    `timestamp`\\n\"\n                + \"  FROM\\n\"\n                + \"    (\\n\"\n                + \"      SELECT\\n\"\n                + \"        status,\\n\"\n                + \"        id,\\n\"\n                + \"        `timestamp`,\\n\"\n                + \"        row_number() over(\\n\"\n                + \"          PARTITION by id\\n\"\n                + \"          ORDER BY\\n\"\n                + \"            `timestamp` DESC\\n\"\n                + \"        ) AS rn\\n\"\n                + \"      FROM\\n\"\n                + \"        (\\n\"\n                + \"          SELECT\\n\"\n                + \"            status,\\n\"\n                + \"            id,\\n\"\n                + \"            `timestamp`\\n\"\n                + \"          FROM\\n\"\n                + \"            source_db.source_table\\n\"\n                + \"        ) t1\\n\"\n                + \"    ) t2\\n\"\n                + \"  WHERE\\n\"\n                + \"    rn = 1\\n\"\n                + \")\\n\"\n                + \"SELECT\\n\"\n                + \"  status_mapper(status) as status,\\n\"\n                + \"  sum(part_uv) as uv\\n\"\n                + \"FROM\\n\"\n                + \"  (\\n\"\n                + \"    SELECT\\n\"\n                + \"      status,\\n\"\n                + \"      count(distinct id) as part_uv\\n\"\n                + \"    FROM\\n\"\n                + \"      detail_tmp\\n\"\n                + \"    GROUP BY\\n\"\n                + \"      status,\\n\"\n                + \"      mod(id, 100)\\n\"\n    
            + \"  )\\n\"\n                + \"GROUP BY\\n\"\n                + \"  status\";\n\n        Table result = tEnv.sqlQuery(sql);\n\n        tEnv.toRetractStream(result, Row.class).print();\n\n        String s = env.getExecutionPlan();\n\n        env.execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_01/countdistincterror/udf/Mod_UDF.java",
    "content": "package flink.examples.sql._01.countdistincterror.udf;\n\nimport org.apache.flink.table.functions.ScalarFunction;\n\n\npublic class Mod_UDF extends ScalarFunction {\n\n    public int eval(long id, int remainder) {\n        return (int) (id % remainder);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_01/countdistincterror/udf/StatusMapper1_UDF.java",
    "content": "package flink.examples.sql._01.countdistincterror.udf;\n\nimport org.apache.flink.table.functions.ScalarFunction;\n\n\npublic class StatusMapper1_UDF extends ScalarFunction {\n\n    private int i = 0;\n\n    public String eval(String status) {\n\n        if (i == 5) {\n            i++;\n            return \"等级4\";\n        } else {\n            i++;\n            if (\"1\".equals(status)) {\n                return \"等级1\";\n            } else if (\"2\".equals(status)) {\n                return \"等级2\";\n            } else if (\"3\".equals(status)) {\n                return \"等级3\";\n            }\n        }\n        return \"未知\";\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_01/countdistincterror/udf/StatusMapper_UDF.java",
    "content": "package flink.examples.sql._01.countdistincterror.udf;\n\nimport org.apache.flink.table.functions.TableFunction;\n\n\npublic class StatusMapper_UDF extends TableFunction<String> {\n\n    private int i = 0;\n\n    public void eval(String status) throws InterruptedException {\n\n        if (i == 6) {\n            Thread.sleep(2000L);\n        }\n\n        if (i == 5) {\n            collect(\"等级4\");\n        } else {\n            if (\"1\".equals(status)) {\n                collect(\"等级1\");\n            } else if (\"2\".equals(status)) {\n                collect(\"等级2\");\n            } else if (\"3\".equals(status)) {\n                collect(\"等级3\");\n            }\n        }\n        i++;\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_02/timezone/TimeZoneTest.java",
    "content": "package flink.examples.sql._02.timezone;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper_UDF;\n\n\npublic class TimeZoneTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"table.local-time-zone\", \"GMT+08:00\");\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                flinkEnv.env().fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627254000000L), // 北京时间：2021-07-26 07:00:00\n                        Tuple3.of(\"2\", 1L, 1627218000000L + 5000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 86400000 + 7000L)))\n                .assignTimestampsAndWatermarks(\n                        new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Long, Long>>(Time.seconds(0L)) {\n                            @Override\n                            public long extractTimestamp(Tuple3<String, Long, Long> element) {\n                                return element.f2;\n                            }\n                        });\n\n        flinkEnv.streamTEnv().registerFunction(\"mod\", new Mod_UDF());\n\n        flinkEnv.streamTEnv().registerFunction(\"status_mapper\", new StatusMapper_UDF());\n\n        flinkEnv.streamTEnv().createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp, rowtime.rowtime\");\n\n        String sql = \"SELECT\\n\"\n                + \"  count(1),\\n\"\n                + \"  cast(tumble_start(rowtime, INTERVAL '1' DAY) as string)\\n\"\n                + \"FROM\\n\"\n                + \"  source_db.source_table\\n\"\n                + \"GROUP BY\\n\"\n                + \"  tumble(rowtime, INTERVAL '1' DAY)\";\n\n        Table result = flinkEnv.streamTEnv().sqlQuery(sql);\n\n        flinkEnv.streamTEnv().toAppendStream(result, Row.class).print();\n\n        flinkEnv.env().execute();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_02/timezone/TimeZoneTest2.java",
    "content": "package flink.examples.sql._02.timezone;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.TableResult;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper_UDF;\nimport lombok.extern.slf4j.Slf4j;\n\n@Slf4j\npublic class TimeZoneTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                flinkEnv.env().fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627254000000L), // 北京时间：2021-07-26 07:00:00\n                        Tuple3.of(\"2\", 1L, 1627218000000L + 5000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 86400000 + 7000L)))\n                        .assignTimestampsAndWatermarks(\n                                new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Long, Long>>(Time.seconds(0L)) {\n                                    @Override\n                                    public long extractTimestamp(Tuple3<String, Long, Long> element) {\n                                        return element.f2;\n                                    }\n                                });\n\n        flinkEnv.streamTEnv().registerFunction(\"mod\", new Mod_UDF());\n\n        flinkEnv.streamTEnv().registerFunction(\"status_mapper\", new StatusMapper_UDF());\n\n        flinkEnv.streamTEnv().createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp, server_timestamp.rowtime\");\n\n        TableResult tableResult = flinkEnv\n                .streamTEnv()\n                .executeSql(\"DESC source_db.source_table\");\n\n        tableResult.print();\n\n        /**\n         * +------------------+------------------------+------+-----+--------+-----------+\n         * |             name |                   type | null | key | extras | watermark |\n         * +------------------+------------------------+------+-----+--------+-----------+\n         * |           status |                 STRING | true |     |        |           |\n         * |               id |                 BIGINT | true |     |        |           |\n         * |        timestamp |                 BIGINT | true |     |        |           |\n         * | server_timestamp | TIMESTAMP(3) *ROWTIME* | true |     |        |           |\n         * +------------------+------------------------+------+-----+--------+-----------+\n         */\n\n   
     String create_view_sql = \"CREATE TEMPORARY VIEW source_db.source_view AS \\n\"\n                + \"SELECT status, id, `timestamp`, cast(server_timestamp as TIMESTAMP_LTZ(3)) as rowtime FROM source_db.source_table\";\n\n        flinkEnv\n                .streamTEnv()\n                .executeSql(create_view_sql);\n\n        flinkEnv\n                .streamTEnv()\n                .executeSql(\"DESC source_db.source_view\")\n                .print();\n\n        /**\n         * +-----------+------------------+------+-----+--------+-----------+\n         * |      name |             type | null | key | extras | watermark |\n         * +-----------+------------------+------+-----+--------+-----------+\n         * |    status |           STRING | true |     |        |           |\n         * |        id |           BIGINT | true |     |        |           |\n         * | timestamp |           BIGINT | true |     |        |           |\n         * |   rowtime | TIMESTAMP_LTZ(3) | true |     |        |           |\n         * +-----------+------------------+------+-----+--------+-----------+\n         */\n\n        String sql = \"SELECT\\n\"\n                + \"  count(1),\\n\"\n                + \"  cast(tumble_start(rowtime, INTERVAL '1' DAY) as string)\\n\"\n                + \"FROM\\n\"\n                + \"  source_db.source_table\\n\"\n                + \"GROUP BY\\n\"\n                + \"  tumble(rowtime, INTERVAL '1' DAY)\";\n\n        /**\n         * +I[9, 2021-07-25 00:00:00.000]\n         * +I[1, 2021-07-26 00:00:00.000]\n         */\n\n        Table result = flinkEnv.streamTEnv().sqlQuery(sql);\n\n        flinkEnv.streamTEnv().toAppendStream(result, Row.class).print();\n\n        flinkEnv.env().execute();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_02/timezone/TimeZoneTest3.java",
    "content": "package flink.examples.sql._02.timezone;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TimeZoneTest3 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String exampleSql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp_LTZ(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    window_end_timestamp bigint,\\n\"\n                + \"    window_start_timestamp bigint,\\n\"\n                + \"    window_end timestamp(3),\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT,\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end_timestamp, \\n\"\n                + \"      UNIX_TIMESTAMP(CAST(window_start AS STRING)) * 1000 as window_start_timestamp, \\n\"\n                + \"      window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '1' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\";\n\n        for (String innerSql : exampleSql.split(\";\")) {\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/CreateViewTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class CreateViewTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.table.user_defined.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"CREATE VIEW query_view as\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \";\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM query_view;\";\n\n        // 临时 VIEW\n        String TEMPORARY_VIEW_SQL = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.table.user_defined.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"CREATE TEMPORARY VIEW query_view as\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \";\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM query_view;\";\n\n        // 临时 Table\n        String TEMPORARY_TABLE_SQL = \"CREATE TEMPORARY TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.table.user_defined.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TEMPORARY TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"CREATE TEMPORARY VIEW 
query_view as\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \";\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM query_view;\";\n\n        Arrays.stream(TEMPORARY_TABLE_SQL.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/DataStreamSourceEventTimeTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\npublic class DataStreamSourceEventTimeTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode()\n                .build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        // 1. 分配 watermark\n        DataStream<Row> r = env.addSource(new UserDefinedSource())\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<Row>(Time.minutes(0L)) {\n                    @Override\n                    public long extractTimestamp(Row element) {\n                        return (long) element.getField(\"f2\");\n                    }\n                });\n        // 2. 使用 f2.rowtime 的方式将 f2 字段指为事件时间时间戳\n        Table sourceTable = tEnv.fromDataStream(r, \"f0, f1, f2.rowtime\");\n\n        tEnv.createTemporaryView(\"source_table\", sourceTable);\n\n        // 3. 在 tumble window 中使用 f2\n        String tumbleWindowSql =\n                \"SELECT TUMBLE_START(f2, INTERVAL '5' SECOND), COUNT(DISTINCT f0)\\n\"\n                + \"FROM source_table\\n\"\n                + \"GROUP BY TUMBLE(f2, INTERVAL '5' SECOND)\"\n                ;\n\n        Table resultTable = tEnv.sqlQuery(tumbleWindowSql);\n\n        tEnv.toDataStream(resultTable, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", System.currentTimeMillis()));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/DataStreamSourceProcessingTimeTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\npublic class DataStreamSourceProcessingTimeTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode()\n                .build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        // 1. 分配 watermark\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        // 2. 使用 proctime.proctime 的方式将 f2 字段指为处理时间时间戳\n        Table sourceTable = tEnv.fromDataStream(r, \"f0, f1, f2, proctime.proctime\");\n\n        tEnv.createTemporaryView(\"source_table\", sourceTable);\n\n        // 3. 在 tumble window 中使用 f2\n        String tumbleWindowSql =\n                \"SELECT TUMBLE_START(proctime, INTERVAL '5' SECOND), COUNT(DISTINCT f0)\\n\"\n                + \"FROM source_table\\n\"\n                + \"GROUP BY TUMBLE(proctime, INTERVAL '5' SECOND)\"\n                ;\n\n        Table resultTable = tEnv.sqlQuery(tumbleWindowSql);\n\n        tEnv.toDataStream(resultTable, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", System.currentTimeMillis()));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/KafkaSourceTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class KafkaSourceTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode()\n                .build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.executeSql(\n                \"CREATE TABLE KafkaSourceTable (\\n\"\n                        + \"  `f0` STRING,\\n\"\n                        + \"  `f1` STRING\\n\"\n                        + \") WITH (\\n\"\n                        + \"  'connector' = 'kafka',\\n\"\n                        + \"  'topic' = 'topic',\\n\"\n                        + \"  'properties.bootstrap.servers' = 'localhost:9092',\\n\"\n                        + \"  'properties.group.id' = 'testGroup',\\n\"\n                        + \"  'format' = 'json'\\n\"\n                        + \")\"\n        );\n\n        Table t = tEnv.sqlQuery(\"SELECT * FROM KafkaSourceTable\");\n\n        tEnv.toAppendStream(t, Row.class).print();\n\n        env.execute();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/RedisLookupTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.TableResult;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class RedisLookupTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        Table sourceTable = tEnv.fromDataStream(r, Schema.newBuilder()\n                .columnByExpression(\"proctime\", \"PROCTIME()\")\n                .build());\n\n        tEnv.createTemporaryView(\"leftTable\", sourceTable);\n\n        String sql = \"CREATE TABLE dimTable (\\n\"\n                + \"    name STRING,\\n\"\n                + \"    name1 STRING,\\n\"\n                + \"    score BIGINT\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'redis',\\n\"\n                + \"  'hostname' = '127.0.0.1',\\n\"\n                + \"  'port' = '6379',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'lookup.cache.max-rows' = '500',\\n\"\n                + \"  'lookup.cache.ttl' = '3600',\\n\"\n                + \"  'lookup.max-retries' = '1'\\n\"\n                + \")\";\n\n        String joinSql = \"SELECT o.f0, o.f1, c.name, c.name1, c.score\\n\"\n                + \"FROM leftTable AS o\\n\"\n                + \"LEFT JOIN dimTable FOR SYSTEM_TIME AS OF o.proctime AS c\\n\"\n                + \"ON o.f0 = c.name\";\n\n        TableResult dimTable = tEnv.executeSql(sql);\n\n        Table t = tEnv.sqlQuery(joinSql);\n\n        //        Table t = tEnv.sqlQuery(\"select * from leftTable\");\n\n        tEnv.toAppendStream(t, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\", \"b\", 1L));\n\n                Thread.sleep(10L);\n            }\n\n        }\n\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new 
RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/RedisSinkTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class RedisSinkTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        Table sourceTable = tEnv.fromDataStream(r, Schema.newBuilder()\n                .columnByExpression(\"proctime\", \"PROCTIME()\")\n                .build());\n\n        tEnv.createTemporaryView(\"leftTable\", sourceTable);\n\n        String sql = \"CREATE TABLE redis_sink_table (\\n\"\n                + \"    key STRING,\\n\"\n                + \"    `value` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'redis',\\n\"\n                + \"  'hostname' = '127.0.0.1',\\n\"\n                + \"  'port' = '6379',\\n\"\n                + \"  'write.mode' = 'string'\\n\"\n                + \")\";\n\n        String insertSql = \"INSERT INTO redis_sink_table\\n\"\n                + \"SELECT o.f0, o.f1\\n\"\n                + \"FROM leftTable AS o\\n\";\n\n        tEnv.executeSql(sql);\n\n        tEnv.executeSql(insertSql);\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\", \"b\", 1L));\n\n                Thread.sleep(10L);\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/SocketSourceTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.TableResult;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class SocketSourceTest {\n\n    public static void main(String[] args) throws Exception {\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        TableResult tr = tEnv.executeSql(\n                \"CREATE TABLE UserScores (name STRING, score INT)\\n\"\n                        + \"WITH (\\n\"\n                        + \"  'connector' = 'socket',\\n\"\n                        + \"  'hostname' = 'localhost',\\n\"\n                        + \"  'port' = '9999',\\n\"\n                        + \"  'byte-delimiter' = '10',\\n\"\n                        + \"  'format' = 'changelog-csv',\\n\"\n                        + \"  'changelog-csv.column-delimiter' = '|'\\n\"\n                        + \")\"\n        );\n\n//        TableResult tr = tEnv.executeSql(\n//                \"CREATE TABLE Orders (\\n\"\n//                        + \"    order_number BIGINT,\\n\"\n//                        + \"    price        DECIMAL(32,2),\\n\"\n//                        + \"    buyer        ROW<first_name STRING, last_name STRING>,\\n\"\n//                        + \"    order_time   TIMESTAMP(3)\\n\"\n//                        + \") WITH (\\n\"\n//                        + \"  'connector' = 'datagen',\\n\"\n//                        + \"  'number-of-rows' = '10',\\n\"\n//                        + \"  'rows-per-second' = '1'\\n\"\n//                        + \")\"\n//        );\n\n//        Table t = tEnv.sqlQuery(\"SELECT * FROM Orders\");\n        Table t = tEnv.sqlQuery(\"SELECT name, SUM(score) FROM UserScores GROUP BY name\");\n\n        tEnv.toRetractStream(t, Row.class).print();\n\n        env.execute(\"测试\");\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/TableApiKafkaSourceTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\npublic class TableApiKafkaSourceTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode()\n                .build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        Table sourceTable = tEnv.fromDataStream(r\n                , Schema\n                        .newBuilder()\n                        .column(\"f0\", \"string\")\n                        .column(\"f1\", \"string\")\n                        .column(\"f2\", \"bigint\")\n                        .columnByExpression(\"proctime\", \"PROCTIME()\")\n                        .build());\n\n        tEnv.createTemporaryView(\"source_table\", sourceTable);\n\n        String selectWhereSql = \"select f0 from source_table where f1 = 'b'\";\n\n        Table resultTable = tEnv.sqlQuery(selectWhereSql);\n\n        tEnv.toRetractStream(resultTable, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/UpsertKafkaSinkProtobufFormatSupportTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.configuration.Configuration;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * zk：https://www.jianshu.com/p/5491d16e6abd\n * /usr/local/Cellar/zookeeper/3.4.13/bin/zkServer start\n *\n * kafka：https://www.jianshu.com/p/dd2578d47ff6\n * /usr/local/Cellar/kafka/2.2.1/bin/kafka-server-start /usr/local/Cellar/kafka/2.2.1/libexec/config/server.properties &\n *\n * 创建 topic：kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic tuzisir\n * 查看 topic：kafka-topics --list --zookeeper localhost:2181\n * 向 topic 发消息：kafka-console-producer --broker-list localhost:9092 --topic tuzisir\n * 从 topic 消费消息：kafka-console-consumer --bootstrap-server localhost:9092 --topic tuzisir --from-beginning\n */\npublic class UpsertKafkaSinkProtobufFormatSupportTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        Configuration configuration = flinkEnv.streamTEnv().getConfig().getConfiguration();\n        // set low-level key-value options\n\n        configuration.setString(\"table.exec.mini-batch.enabled\", \"true\"); // enable mini-batch optimization\n        configuration.setString(\"table.exec.mini-batch.allow-latency\", \"5 s\"); // use 5 seconds to buffer input records\n        configuration.setString(\"table.exec.mini-batch.size\", \"5000\"); // the maximum number of records can be buffered by each aggregate operator task\n        configuration.setString(\"pipeline.name\", \"GROUP AGG MINI BATCH 案例\"); // the maximum number of records can be buffered by each aggregate operator task\n\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.order_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    count_result BIGINT,\\n\"\n                + \"    sum_result BIGINT,\\n\"\n                + \"    avg_result DOUBLE,\\n\"\n                + \"    min_result BIGINT,\\n\"\n                + \"    max_result BIGINT,\\n\"\n                + \"    PRIMARY KEY (`order_id`) NOT ENFORCED\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'upsert-kafka',\\n\"\n                + \"  'topic' = 'tuzisir',\\n\"\n                + \"  'properties.bootstrap.servers' = 'localhost:9092',\\n\"\n                + \"  'key.format' = 'json',\\n\"\n                + \"  'value.format' = 'protobuf',\\n\"\n                + \"  'value.protobuf.class-name' = 'flink.examples.sql._04.format.formats.protobuf.Test'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select order_id,\\n\"\n                + \"       count(*) as count_result,\\n\"\n                + \"       sum(price) as sum_result,\\n\"\n                + \"       avg(price) as avg_result,\\n\"\n                + \"       min(price) as min_result,\\n\"\n                + \"       max(price) as max_result\\n\"\n       
         + \"from source_table\\n\"\n                + \"group by order_id\";\n\n        flinkEnv.streamTEnv().executeSql(sourceSql);\n        flinkEnv.streamTEnv().executeSql(sinkSql);\n        flinkEnv.streamTEnv().executeSql(selectWhereSql);\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/UpsertKafkaSinkTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport org.apache.flink.configuration.Configuration;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * zk：https://www.jianshu.com/p/5491d16e6abd\n * /usr/local/Cellar/zookeeper/3.4.13/bin/zkServer start\n *\n * kafka：https://www.jianshu.com/p/dd2578d47ff6\n * /usr/local/Cellar/kafka/2.2.1/bin/kafka-server-start /usr/local/Cellar/kafka/2.2.1/libexec/config/server.properties &\n *\n * 创建 topic：kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic tuzisir\n * 查看 topic：kafka-topics --list --zookeeper localhost:2181\n * 向 topic 发消息：kafka-console-producer --broker-list localhost:9092 --topic tuzisir\n * 从 topic 消费消息：kafka-console-consumer --bootstrap-server localhost:9092 --topic tuzisir --from-beginning\n */\npublic class UpsertKafkaSinkTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        Configuration configuration = flinkEnv.streamTEnv().getConfig().getConfiguration();\n        // set low-level key-value options\n\n        configuration.setString(\"table.exec.mini-batch.enabled\", \"true\"); // enable mini-batch optimization\n        configuration.setString(\"table.exec.mini-batch.allow-latency\", \"5 s\"); // use 5 seconds to buffer input records\n        configuration.setString(\"table.exec.mini-batch.size\", \"5000\"); // the maximum number of records can be buffered by each aggregate operator task\n        configuration.setString(\"pipeline.name\", \"GROUP AGG MINI BATCH 案例\"); // the maximum number of records can be buffered by each aggregate operator task\n\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.order_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    count_result BIGINT,\\n\"\n                + \"    sum_result BIGINT,\\n\"\n                + \"    avg_result DOUBLE,\\n\"\n                + \"    min_result BIGINT,\\n\"\n                + \"    max_result BIGINT,\\n\"\n                + \"    PRIMARY KEY (`order_id`) NOT ENFORCED\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'upsert-kafka',\\n\"\n                + \"  'topic' = 'tuzisir',\\n\"\n                + \"  'properties.bootstrap.servers' = 'localhost:9092',\\n\"\n                + \"  'key.format' = 'json',\\n\"\n                + \"  'value.format' = 'json'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select order_id,\\n\"\n                + \"       count(*) as count_result,\\n\"\n                + \"       sum(price) as sum_result,\\n\"\n                + \"       avg(price) as avg_result,\\n\"\n                + \"       min(price) as min_result,\\n\"\n                + \"       max(price) as max_result\\n\"\n                + \"from source_table\\n\"\n                + \"group by order_id\";\n\n        flinkEnv.streamTEnv().executeSql(sourceSql);\n   
     flinkEnv.streamTEnv().executeSql(sinkSql);\n        flinkEnv.streamTEnv().executeSql(selectWhereSql);\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/UserDefinedSourceTest.java",
    "content": "package flink.examples.sql._03.source_sink;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class UserDefinedSourceTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.table.user_defined.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table;\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/sink/Abilities_SinkFunction.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.sink;\n\nimport org.apache.flink.api.common.functions.util.PrintSinkOutputWriter;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.sink.RichSinkFunction;\nimport org.apache.flink.streaming.api.functions.sink.SinkFunction;\nimport org.apache.flink.streaming.api.operators.StreamingRuntimeContext;\nimport org.apache.flink.table.connector.sink.DynamicTableSink.DataStructureConverter;\nimport org.apache.flink.table.data.RowData;\n\npublic class Abilities_SinkFunction extends RichSinkFunction<RowData> {\n\n    private static final long serialVersionUID = 1L;\n\n    private final DataStructureConverter converter;\n    private final PrintSinkOutputWriter<String> writer;\n\n    public Abilities_SinkFunction(\n            DataStructureConverter converter, String printIdentifier, boolean stdErr) {\n        this.converter = converter;\n        this.writer = new PrintSinkOutputWriter<>(printIdentifier, stdErr);\n    }\n\n    @Override\n    public void open(Configuration parameters) throws Exception {\n        super.open(parameters);\n        StreamingRuntimeContext context = (StreamingRuntimeContext) getRuntimeContext();\n        writer.open(context.getIndexOfThisSubtask(), context.getNumberOfParallelSubtasks());\n    }\n\n    @Override\n    public void invoke(RowData value, SinkFunction.Context context) {\n        Object data = converter.toExternal(value);\n        assert data != null;\n        writer.write(data.toString());\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/sink/Abilities_TableSink.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.sink;\n\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport javax.annotation.Nullable;\n\nimport org.apache.flink.table.api.DataTypes;\nimport org.apache.flink.table.connector.ChangelogMode;\nimport org.apache.flink.table.connector.sink.DynamicTableSink;\nimport org.apache.flink.table.connector.sink.SinkFunctionProvider;\nimport org.apache.flink.table.connector.sink.abilities.SupportsOverwrite;\nimport org.apache.flink.table.connector.sink.abilities.SupportsPartitioning;\nimport org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata;\nimport org.apache.flink.table.types.DataType;\n\nimport com.google.common.collect.Maps;\n\nimport flink.examples.JacksonUtils;\nimport lombok.extern.slf4j.Slf4j;\n\n@Slf4j\npublic class Abilities_TableSink implements DynamicTableSink\n        , SupportsOverwrite\n        , SupportsPartitioning\n        , SupportsWritingMetadata {\n\n    private DataType type;\n    private final String printIdentifier;\n    private final boolean stdErr;\n    private final @Nullable\n    Integer parallelism;\n    private boolean overwrite = false;\n    private Map<String, String> staticPartition;\n\n    public Abilities_TableSink(\n            DataType type, String printIdentifier, boolean stdErr, Integer parallelism) {\n        this.type = type;\n        this.printIdentifier = printIdentifier;\n        this.stdErr = stdErr;\n        this.parallelism = parallelism;\n    }\n\n    @Override\n    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {\n        return requestedMode;\n    }\n\n    @Override\n    public SinkRuntimeProvider getSinkRuntimeProvider(DynamicTableSink.Context context) {\n        DataStructureConverter converter = context.createDataStructureConverter(type);\n        return SinkFunctionProvider.of(\n                new Abilities_SinkFunction(converter, printIdentifier, stdErr), parallelism);\n    }\n\n    @Override\n    public DynamicTableSink copy() {\n        return new Abilities_TableSink(type, printIdentifier, stdErr, parallelism);\n    }\n\n    @Override\n    public String asSummaryString() {\n        return \"Print to \" + (stdErr ? \"System.err\" : \"System.out\");\n    }\n\n    @Override\n    public void applyOverwrite(boolean overwrite) {\n        this.overwrite = overwrite;\n    }\n\n    @Override\n    public void applyStaticPartition(Map<String, String> partition) {\n        this.staticPartition = Maps.newHashMap(partition);\n    }\n\n    @Override\n    public Map<String, DataType> listWritableMetadata() {\n        return new HashMap<String, DataType>() {{\n            put(\"flink_write_timestamp\", DataTypes.BIGINT());\n        }};\n    }\n\n    @Override\n    public void applyWritableMetadata(List<String> metadataKeys, DataType consumedDataType) {\n\n        this.type = consumedDataType;\n\n        log.info(\"metadataKeys:\" + JacksonUtils.bean2Json(metadataKeys));\n        log.info(\"consumedDataType:\" + JacksonUtils.bean2Json(consumedDataType));\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/sink/Abilities_TableSinkFactory.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.sink;\n\nimport static org.apache.flink.configuration.ConfigOptions.key;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.connector.sink.DynamicTableSink;\nimport org.apache.flink.table.factories.DynamicTableSinkFactory;\nimport org.apache.flink.table.factories.FactoryUtil;\n\npublic class Abilities_TableSinkFactory implements DynamicTableSinkFactory {\n\n    public static final String IDENTIFIER = \"abilities_print\";\n\n    public static final ConfigOption<String> PRINT_IDENTIFIER =\n            key(\"print-identifier\")\n                    .stringType()\n                    .noDefaultValue()\n                    .withDescription(\n                            \"Message that identify print and is prefixed to the output of the value.\");\n\n    public static final ConfigOption<Boolean> STANDARD_ERROR =\n            key(\"standard-error\")\n                    .booleanType()\n                    .defaultValue(false)\n                    .withDescription(\n                            \"True, if the format should print to standard error instead of standard out.\");\n\n    @Override\n    public String factoryIdentifier() {\n        return IDENTIFIER;\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        return new HashSet<>();\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n        Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(PRINT_IDENTIFIER);\n        options.add(STANDARD_ERROR);\n        options.add(FactoryUtil.SINK_PARALLELISM);\n        return options;\n    }\n\n    @Override\n    public DynamicTableSink createDynamicTableSink(Context context) {\n        FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n        helper.validate();\n        ReadableConfig options = helper.getOptions();\n        return new Abilities_TableSink(\n                context.getCatalogTable().getResolvedSchema().toPhysicalRowDataType(),\n                options.get(PRINT_IDENTIFIER),\n                options.get(STANDARD_ERROR),\n                options.getOptional(FactoryUtil.SINK_PARALLELISM).orElse(null));\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/sink/_01_SupportsWritingMetadata_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.sink;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _01_SupportsWritingMetadata_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT METADATA VIRTUAL,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_write_timestamp BIGINT METADATA,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'abilities_print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    user_id\\n\"\n                + \"    , flink_read_timestamp as flink_write_timestamp\\n\"\n                + \"    , name\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/Abilities_SourceFunction.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.streaming.api.watermark.Watermark;\nimport org.apache.flink.table.data.RowData;\n\nimport com.google.common.collect.ImmutableMap;\n\nimport flink.examples.JacksonUtils;\n\npublic class Abilities_SourceFunction extends RichSourceFunction<RowData> {\n\n    private DeserializationSchema<RowData> dser;\n\n    private long limit = -1;\n\n    private volatile boolean isCancel = false;\n\n    private boolean enableSourceWatermark = false;\n\n    public Abilities_SourceFunction(DeserializationSchema<RowData> dser) {\n        this.dser = dser;\n    }\n\n    public Abilities_SourceFunction(DeserializationSchema<RowData> dser, long limit) {\n        this.dser = dser;\n        this.limit = limit;\n    }\n\n    public Abilities_SourceFunction(DeserializationSchema<RowData> dser, boolean enableSourceWatermark) {\n        this.dser = dser;\n        this.enableSourceWatermark = enableSourceWatermark;\n    }\n\n    @Override\n    public void run(SourceContext<RowData> ctx) throws Exception {\n        int i = 0;\n        while (!this.isCancel) {\n\n            long currentTimeMills = System.currentTimeMillis();\n\n            ctx.collect(this.dser.deserialize(\n                    JacksonUtils.bean2Json(ImmutableMap.of(\n                            \"user_id\", 11111L + i\n                            , \"name\", \"antigeneral\"\n                            , \"flink_read_timestamp\", currentTimeMills + \"\")).getBytes()\n            ));\n            Thread.sleep(1000);\n            i++;\n\n            if (limit >= 0 && i > limit) {\n                this.isCancel = true;\n            }\n\n            if (enableSourceWatermark) {\n                ctx.emitWatermark(new Watermark(currentTimeMills));\n            }\n        }\n    }\n\n    @Override\n    public void cancel() {\n        this.isCancel = true;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/Abilities_TableSource.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.HashMap;\nimport java.util.LinkedList;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Optional;\n\nimport org.apache.flink.api.common.eventtime.WatermarkStrategy;\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.table.api.DataTypes;\nimport org.apache.flink.table.api.TableColumn.MetadataColumn;\nimport org.apache.flink.table.api.TableSchema;\nimport org.apache.flink.table.connector.ChangelogMode;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.connector.source.ScanTableSource;\nimport org.apache.flink.table.connector.source.SourceFunctionProvider;\nimport org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;\nimport org.apache.flink.table.connector.source.abilities.SupportsLimitPushDown;\nimport org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;\nimport org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown;\nimport org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata;\nimport org.apache.flink.table.connector.source.abilities.SupportsSourceWatermark;\nimport org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.expressions.ResolvedExpression;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.utils.TableSchemaUtils;\n\nimport com.google.common.collect.Lists;\n\nimport lombok.SneakyThrows;\nimport lombok.extern.slf4j.Slf4j;\n\n@Slf4j\npublic class Abilities_TableSource implements ScanTableSource\n        , SupportsFilterPushDown // 过滤条件下推\n        , SupportsLimitPushDown // limit 条件下推\n        , SupportsPartitionPushDown //\n        , SupportsProjectionPushDown // select 下推\n        , SupportsReadingMetadata // 元数据\n        , SupportsWatermarkPushDown\n        , SupportsSourceWatermark {\n\n    private final String className;\n    private final DecodingFormat<DeserializationSchema<RowData>> decodingFormat;\n    private final DataType sourceRowDataType;\n    private DataType producedDataType;\n    private TableSchema physicalSchema;\n    private TableSchema tableSchema;\n    private long limit = -1;\n    private WatermarkStrategy<RowData> watermarkStrategy;\n    private boolean enableSourceWatermark;\n    private List<ResolvedExpression> filters;\n    private List<String> metadataKeys;\n\n    public Abilities_TableSource(\n            String className,\n            DecodingFormat<DeserializationSchema<RowData>> decodingFormat,\n            DataType sourceRowDataType,\n            DataType producedDataType,\n            TableSchema physicalSchema,\n            TableSchema tableSchema) {\n        DataTypes.BIGINT();\n\n        this.className = className;\n        this.decodingFormat = decodingFormat;\n        this.sourceRowDataType = sourceRowDataType;\n        this.producedDataType = producedDataType;\n        this.physicalSchema = physicalSchema;\n        this.tableSchema = tableSchema;\n    }\n\n    @Override\n    public ChangelogMode getChangelogMode() {\n        // in our example the format decides about the changelog mode\n        // but it could also be the source itself\n        return decodingFormat.getChangelogMode();\n    
}\n\n    @SneakyThrows\n    @Override\n    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {\n\n        // create runtime classes that are shipped to the cluster\n\n        final DeserializationSchema<RowData> deserializer = decodingFormat.createRuntimeDecoder(\n                runtimeProviderContext,\n                this.producedDataType);\n\n        Class<?> clazz = this.getClass().getClassLoader().loadClass(className);\n\n        RichSourceFunction<RowData> r;\n\n        if (limit > 0) {\n            r = (RichSourceFunction<RowData>) clazz.getConstructor(DeserializationSchema.class, long.class).newInstance(deserializer, this.limit);\n        } else if (enableSourceWatermark) {\n            r = (RichSourceFunction<RowData>) clazz.getConstructor(DeserializationSchema.class, boolean.class).newInstance(deserializer, this.enableSourceWatermark);\n        } else {\n            r = (RichSourceFunction<RowData>) clazz.getConstructor(DeserializationSchema.class).newInstance(deserializer);\n        }\n\n        return SourceFunctionProvider.of(r, false);\n    }\n\n    @Override\n    public DynamicTableSource copy() {\n        return new Abilities_TableSource(className, decodingFormat, sourceRowDataType, producedDataType, physicalSchema, tableSchema);\n    }\n\n    @Override\n    public String asSummaryString() {\n        return \"Socket Table Source\";\n    }\n\n    @Override\n    public Result applyFilters(List<ResolvedExpression> filters) {\n\n        this.filters = new LinkedList<>(filters);\n\n        // 不上推任何过滤条件\n//        return Result.of(Lists.newLinkedList(), filters);\n        // 将所有的过滤条件都上推到 source\n        return Result.of(filters, Lists.newLinkedList());\n    }\n\n    @Override\n    public void applyLimit(long limit) {\n        this.limit = limit;\n    }\n\n    @Override\n    public Optional<List<Map<String, String>>> listPartitions() {\n        return Optional.empty();\n    }\n\n    @Override\n    public void applyPartitions(List<Map<String, String>> remainingPartitions) {\n        System.out.println(1);\n    }\n\n    @Override\n    public boolean supportsNestedProjection() {\n        return false;\n    }\n\n    @Override\n    public void applyProjection(int[][] projectedFields) {\n        this.tableSchema = projectSchemaWithMetadata(this.tableSchema, projectedFields);\n    }\n\n    @Override\n    public Map<String, DataType> listReadableMetadata() {\n        return new HashMap<String, DataType>() {{\n            put(\"flink_read_timestamp\", DataTypes.BIGINT());\n        }};\n    }\n\n    @Override\n    public void applyReadableMetadata(List<String> metadataKeys, DataType producedDataType) {\n        this.metadataKeys = metadataKeys;\n        this.producedDataType = producedDataType;\n    }\n\n    @Override\n    public void applyWatermark(WatermarkStrategy<RowData> watermarkStrategy) {\n        log.info(\"Successfully applyWatermark\");\n\n        this.watermarkStrategy = watermarkStrategy;\n    }\n\n    @Override\n    public void applySourceWatermark() {\n        log.info(\"Successfully applySourceWatermark\");\n\n        this.enableSourceWatermark = true;\n    }\n\n    public static TableSchema projectSchemaWithMetadata(TableSchema tableSchema, int[][] projectedFields) {\n\n        TableSchema.Builder builder = new TableSchema.Builder();\n        TableSchema physicalProjectedSchema = TableSchemaUtils.projectSchema(TableSchemaUtils.getPhysicalSchema(tableSchema), projectedFields);\n\n        physicalProjectedSchema\n                
.getTableColumns()\n                .forEach(\n                        tableColumn -> {\n                            if (tableColumn.isPhysical()) {\n                                builder.field(tableColumn.getName(), tableColumn.getType());\n                            }\n                        });\n\n        tableSchema\n                .getTableColumns()\n                .forEach(\n                        tableColumn -> {\n                            if (tableColumn instanceof MetadataColumn) {\n                                builder.field(tableColumn.getName(), tableColumn.getType());\n                            }\n                        });\n        return builder.build();\n    }\n\n    public static TableSchema getSchemaWithMetadata(TableSchema tableSchema) {\n\n        TableSchema.Builder builder = new TableSchema.Builder();\n\n        tableSchema\n                .getTableColumns()\n                .forEach(\n                        tableColumn -> {\n                            if (tableColumn.isPhysical()) {\n                                builder.field(tableColumn.getName(), tableColumn.getType());\n                            } else if (tableColumn instanceof MetadataColumn) {\n                                builder.field(tableColumn.getName(), tableColumn.getType());\n                            }\n                        });\n        return builder.build();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/Abilities_TableSourceFactory.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ConfigOptions;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.TableSchema;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.factories.DeserializationFormatFactory;\nimport org.apache.flink.table.factories.DynamicTableSourceFactory;\nimport org.apache.flink.table.factories.FactoryUtil;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.utils.TableSchemaUtils;\n\n\npublic class Abilities_TableSourceFactory implements DynamicTableSourceFactory {\n\n    // define all options statically\n    public static final ConfigOption<String> CLASS_NAME = ConfigOptions.key(\"class.name\")\n            .stringType()\n            .noDefaultValue();\n\n    @Override\n    public String factoryIdentifier() {\n        return \"supports_reading_metadata_user_defined\"; // used for matching to `connector = '...'`\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(CLASS_NAME);\n        options.add(FactoryUtil.FORMAT); // use pre-defined option for format\n        return options;\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        return options;\n    }\n\n    @Override\n    public DynamicTableSource createDynamicTableSource(Context context) {\n        // either implement your custom validation logic here ...\n        // or use the provided helper utility\n        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n\n        // discover a suitable decoding format\n        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat = helper.discoverDecodingFormat(\n                DeserializationFormatFactory.class,\n                FactoryUtil.FORMAT);\n\n        // validate all options\n        helper.validate();\n\n        // get the validated options\n        final ReadableConfig options = helper.getOptions();\n        final String className = options.get(CLASS_NAME);\n\n        // derive the produced data type (excluding computed columns) from the catalog table\n        final DataType producedDataType =\n                context.getCatalogTable().getResolvedSchema().toPhysicalRowDataType();\n\n        final DataType sourceRowDataType =\n                context.getCatalogTable().getResolvedSchema().toSourceRowDataType();\n\n        final DataType sinkRowDataType =\n                context.getCatalogTable().getResolvedSchema().toSinkRowDataType();\n\n        final Schema schema =\n                context.getCatalogTable().getUnresolvedSchema();\n\n        TableSchema physicalSchema =\n                TableSchemaUtils.getPhysicalSchema(context.getCatalogTable().getSchema());\n\n\n        TableSchema tableSchema = context.getCatalogTable().getSchema();\n\n        // create and return dynamic table source\n        return new Abilities_TableSource(className\n                , decodingFormat\n            
    , sourceRowDataType\n                , producedDataType\n                , physicalSchema\n                , tableSchema);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/_01_SupportsFilterPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _01_SupportsFilterPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \"WHERE user_id > 3333\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/_02_SupportsLimitPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _02_SupportsLimitPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \"LIMIT 100\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/_03_SupportsPartitionPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _03_SupportsPartitionPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT METADATA VIRTUAL,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \"LIMIT 100\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/_04_SupportsProjectionPushDown_JDBC_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _04_SupportsProjectionPushDown_JDBC_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    id DECIMAL(20, 0),\\n\"\n                + \"    name STRING,\\n\"\n                + \"    owner STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'jdbc',\\n\"\n                + \"  'url' = 'jdbc:mysql://localhost:3306/user_profile',\\n\"\n                + \"  'username' = 'root',\\n\"\n                + \"  'password' = 'root123456',\\n\"\n                + \"  'table-name' = 'user_test'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table_2 (\\n\"\n                + \"    id DECIMAL(20, 0),\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table_2\\n\"\n                + \"SELECT\\n\"\n                + \"    id\\n\"\n                + \"    , name\\n\"\n                + \"FROM source_table_1\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/_04_SupportsProjectionPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _04_SupportsProjectionPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name1` STRING,\\n\"\n                + \"    `name2` STRING,\\n\"\n                + \"    `name3` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    user_id\\n\"\n                + \"    , name1 as name\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/_05_SupportsReadingMetadata_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _05_SupportsReadingMetadata_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT METADATA VIRTUAL,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/_06_SupportsWatermarkPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _06_SupportsWatermarkPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT METADATA VIRTUAL,\\n\"\n                + \"    time_ltz AS TO_TIMESTAMP_LTZ(flink_read_timestamp, 3),\\n\"\n                + \"    `name` STRING,\\n\"\n                + \"    WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    user_id,\\n\"\n                + \"    flink_read_timestamp,\\n\"\n                + \"    name\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/_07_SupportsSourceWatermark_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _07_SupportsSourceWatermark_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT METADATA VIRTUAL,\\n\"\n                + \"    row_time AS TO_TIMESTAMP_LTZ(flink_read_timestamp, 3),\\n\"\n                + \"    WATERMARK FOR row_time AS SOURCE_WATERMARK()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    window_end bigint,\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      count(distinct user_id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '10' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/Before_Abilities_SourceFunction.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.streaming.api.watermark.Watermark;\nimport org.apache.flink.table.data.RowData;\n\nimport com.google.common.collect.ImmutableMap;\n\nimport flink.examples.JacksonUtils;\n\npublic class Before_Abilities_SourceFunction extends RichSourceFunction<RowData> {\n\n    private DeserializationSchema<RowData> dser;\n\n    private long limit = -1;\n\n    private volatile boolean isCancel = false;\n\n    private boolean enableSourceWatermark = false;\n\n    public Before_Abilities_SourceFunction(DeserializationSchema<RowData> dser) {\n        this.dser = dser;\n    }\n\n    public Before_Abilities_SourceFunction(DeserializationSchema<RowData> dser, long limit) {\n        this.dser = dser;\n        this.limit = limit;\n    }\n\n    public Before_Abilities_SourceFunction(DeserializationSchema<RowData> dser, boolean enableSourceWatermark) {\n        this.dser = dser;\n        this.enableSourceWatermark = enableSourceWatermark;\n    }\n\n    @Override\n    public void run(SourceContext<RowData> ctx) throws Exception {\n        int i = 0;\n        while (!this.isCancel) {\n\n            long currentTimeMills = System.currentTimeMillis();\n\n            ctx.collect(this.dser.deserialize(\n                    JacksonUtils.bean2Json(ImmutableMap.of(\n                            \"user_id\", 11111L + i\n                            , \"name\", \"antigeneral\"\n                            , \"flink_read_timestamp\", currentTimeMills + \"\")).getBytes()\n            ));\n            Thread.sleep(1000);\n            i++;\n\n            if (limit >= 0 && i > limit) {\n                this.isCancel = true;\n            }\n\n            if (enableSourceWatermark) {\n                ctx.emitWatermark(new Watermark(currentTimeMills));\n            }\n        }\n    }\n\n    @Override\n    public void cancel() {\n        this.isCancel = true;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/Before_Abilities_TableSource.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport org.apache.flink.api.common.eventtime.WatermarkStrategy;\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.table.api.TableColumn.MetadataColumn;\nimport org.apache.flink.table.api.TableSchema;\nimport org.apache.flink.table.connector.ChangelogMode;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.connector.source.ScanTableSource;\nimport org.apache.flink.table.connector.source.SourceFunctionProvider;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.types.DataType;\n\nimport lombok.SneakyThrows;\nimport lombok.extern.slf4j.Slf4j;\n\n@Slf4j\npublic class Before_Abilities_TableSource implements ScanTableSource {\n\n    private final String className;\n    private final DecodingFormat<DeserializationSchema<RowData>> decodingFormat;\n    private final DataType sourceRowDataType;\n    private final DataType producedDataType;\n    private TableSchema physicalSchema;\n    private TableSchema tableSchema;\n    private long limit = -1;\n    private WatermarkStrategy<RowData> watermarkStrategy;\n    boolean enableSourceWatermark;\n\n    public Before_Abilities_TableSource(\n            String className,\n            DecodingFormat<DeserializationSchema<RowData>> decodingFormat,\n            DataType sourceRowDataType,\n            DataType producedDataType,\n            TableSchema physicalSchema,\n            TableSchema tableSchema) {\n        this.className = className;\n        this.decodingFormat = decodingFormat;\n        this.sourceRowDataType = sourceRowDataType;\n        this.producedDataType = producedDataType;\n        this.physicalSchema = physicalSchema;\n        this.tableSchema = tableSchema;\n    }\n\n    @Override\n    public ChangelogMode getChangelogMode() {\n        // in our example the format decides about the changelog mode\n        // but it could also be the source itself\n        return decodingFormat.getChangelogMode();\n    }\n\n    @SneakyThrows\n    @Override\n    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {\n\n        final DeserializationSchema<RowData> deserializer = decodingFormat.createRuntimeDecoder(\n                runtimeProviderContext,\n                getSchemaWithMetadata(this.tableSchema).toRowDataType());\n\n        Class<?> clazz = this.getClass().getClassLoader().loadClass(className);\n\n        RichSourceFunction<RowData> r;\n\n        if (limit > 0) {\n            r = (RichSourceFunction<RowData>) clazz.getConstructor(DeserializationSchema.class, long.class).newInstance(deserializer, this.limit);\n        } else if (enableSourceWatermark) {\n            r = (RichSourceFunction<RowData>) clazz.getConstructor(DeserializationSchema.class, boolean.class).newInstance(deserializer, this.enableSourceWatermark);\n        } else {\n            r = (RichSourceFunction<RowData>) clazz.getConstructor(DeserializationSchema.class).newInstance(deserializer);\n        }\n\n        return SourceFunctionProvider.of(r, false);\n    }\n\n    @Override\n    public DynamicTableSource copy() {\n        return new Before_Abilities_TableSource(className, decodingFormat, sourceRowDataType, producedDataType, physicalSchema, tableSchema);\n    }\n\n    @Override\n    public String 
asSummaryString() {\n        return \"Socket Table Source\";\n    }\n\n    public static TableSchema getSchemaWithMetadata(TableSchema tableSchema) {\n\n        TableSchema.Builder builder = new TableSchema.Builder();\n\n        tableSchema\n                .getTableColumns()\n                .forEach(\n                        tableColumn -> {\n                            if (tableColumn.isPhysical()) {\n                                builder.field(tableColumn.getName(), tableColumn.getType());\n                            } else if (tableColumn instanceof MetadataColumn) {\n                                builder.field(tableColumn.getName(), tableColumn.getType());\n                            }\n                        });\n        return builder.build();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/Before_Abilities_TableSourceFactory.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ConfigOptions;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.TableSchema;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.factories.DeserializationFormatFactory;\nimport org.apache.flink.table.factories.DynamicTableSourceFactory;\nimport org.apache.flink.table.factories.FactoryUtil;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.utils.TableSchemaUtils;\n\n\npublic class Before_Abilities_TableSourceFactory implements DynamicTableSourceFactory {\n\n    // define all options statically\n    public static final ConfigOption<String> CLASS_NAME = ConfigOptions.key(\"class.name\")\n            .stringType()\n            .noDefaultValue();\n\n    @Override\n    public String factoryIdentifier() {\n        return \"before_supports_reading_metadata_user_defined\"; // used for matching to `connector = '...'`\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(CLASS_NAME);\n        options.add(FactoryUtil.FORMAT); // use pre-defined option for format\n        return options;\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        return options;\n    }\n\n    @Override\n    public DynamicTableSource createDynamicTableSource(Context context) {\n        // either implement your custom validation logic here ...\n        // or use the provided helper utility\n        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n\n        // discover a suitable decoding format\n        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat = helper.discoverDecodingFormat(\n                DeserializationFormatFactory.class,\n                FactoryUtil.FORMAT);\n\n        // validate all options\n        helper.validate();\n\n        // get the validated options\n        final ReadableConfig options = helper.getOptions();\n        final String className = options.get(CLASS_NAME);\n\n        // derive the produced data type (excluding computed columns) from the catalog table\n        final DataType producedDataType =\n                context.getCatalogTable().getResolvedSchema().toPhysicalRowDataType();\n\n        final DataType sourceRowDataType =\n                context.getCatalogTable().getResolvedSchema().toSourceRowDataType();\n\n        final DataType sinkRowDataType =\n                context.getCatalogTable().getResolvedSchema().toSinkRowDataType();\n\n        final Schema schema =\n                context.getCatalogTable().getUnresolvedSchema();\n\n        TableSchema physicalSchema =\n                TableSchemaUtils.getPhysicalSchema(context.getCatalogTable().getSchema());\n\n\n        TableSchema tableSchema = context.getCatalogTable().getSchema();\n\n        // create and return dynamic table source\n        return new Before_Abilities_TableSource(className\n                , 
decodingFormat\n                , sourceRowDataType\n                , producedDataType\n                , physicalSchema\n                , tableSchema);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/_01_Before_SupportsFilterPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _01_Before_SupportsFilterPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'before_supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.before.Before_Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \"WHERE user_id > 3333\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/_02_Before_SupportsLimitPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _02_Before_SupportsLimitPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'before_supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.before.Before_Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \"LIMIT 100\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/_03_Before_SupportsPartitionPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _03_Before_SupportsPartitionPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT METADATA VIRTUAL,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'before_supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.before.Before_Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    flink_read_timestamp BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\\n\"\n                + \"LIMIT 100\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/_04_Before_SupportsProjectionPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _04_Before_SupportsProjectionPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name1` STRING,\\n\"\n                + \"    `name2` STRING,\\n\"\n                + \"    `name3` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'before_supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.before.Before_Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    user_id\\n\"\n                + \"    , name1 as name\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/_05_Before_SupportsReadingMetadata_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _05_Before_SupportsReadingMetadata_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `name` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'before_supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.before.Before_Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    *\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/_06_Before_SupportsWatermarkPushDown_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _06_Before_SupportsWatermarkPushDown_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    time_ltz AS cast(CURRENT_TIMESTAMP as TIMESTAMP(3)),\\n\"\n                + \"    `name` STRING,\\n\"\n                + \"    WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'before_supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.before.Before_Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    user_id,\\n\"\n                + \"    name\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/abilities/source/before/_07_Before_SupportsSourceWatermark_Test.java",
    "content": "package flink.examples.sql._03.source_sink.abilities.source.before;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class _07_Before_SupportsSourceWatermark_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 用户自定义 SOURCE 案例\");\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as TIMESTAMP(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'before_supports_reading_metadata_user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._03.source_sink.abilities.source.before.Before_Abilities_SourceFunction'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    window_end bigint,\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      count(distinct user_id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '10' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/ddl/TableApiDDLTest.java",
    "content": "package flink.examples.sql._03.source_sink.ddl;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.DataTypes;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.descriptors.CustomConnectorDescriptor;\nimport org.apache.flink.table.descriptors.Schema;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport flink.examples.sql._05.format.formats.protobuf.descriptors.Protobuf;\n\n\npublic class TableApiDDLTest {\n\n    // https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/overview/\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n//        flinkEnv.getStreamTableEnvironment().getConfig().getConfiguration().setString(\"table.exec.emit.early-fire.enabled\", \"true\");\n//        flinkEnv.getStreamTableEnvironment().getConfig().getConfiguration().setString(\"table.exec.emit.early-fire.delay\", \"60 s\");\n\n        String sql = \"CREATE TABLE redis_sink_table (\\n\"\n                + \"    key STRING,\\n\"\n                + \"    `value` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'redis',\\n\"\n                + \"  'hostname' = '127.0.0.1',\\n\"\n                + \"  'port' = '6379',\\n\"\n                + \"  'write.mode' = 'string'\\n\"\n                + \")\";\n\n        // create and register a TableSink\n        final Schema schema = new Schema()\n                .field(\"key\", DataTypes.STRING())\n                .field(\"value\", DataTypes.STRING());\n\n        flinkEnv.getStreamTableEnvironment()\n                .connect(\n                        new CustomConnectorDescriptor(\"redis\", 1, true)\n                        .property(\"hostname\", \"127.0.0.1\")\n                        .property(\"port\", \"6379\")\n                        .property(\"write.mode\", \"string\")\n                )\n                .withFormat(new Protobuf())\n                .withSchema(schema)\n                .createTemporaryTable(\"redis_sink_table\");\n\n\n        DataStream<Row> r = flinkEnv.getStreamExecutionEnvironment().addSource(new UserDefinedSource());\n\n        Table sourceTable = flinkEnv.getStreamTableEnvironment().fromDataStream(r, org.apache.flink.table.api.Schema.newBuilder()\n                .columnByExpression(\"proctime\", \"PROCTIME()\")\n                .build());\n\n        flinkEnv.getStreamTableEnvironment()\n                .createTemporaryView(\"leftTable\", sourceTable);\n\n        String insertSql = \"INSERT INTO redis_sink_table\\n\"\n                + \"SELECT o.f0, o.f1\\n\"\n                + \"FROM leftTable AS o\\n\";\n\n        flinkEnv.getStreamTableEnvironment().executeSql(sql);\n\n        flinkEnv.getStreamTableEnvironment().executeSql(insertSql);\n\n    }\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            while (!this.isCancel) {\n\n                
sourceContext.collect(Row.of(\"a\", \"b\", 1L));\n\n                Thread.sleep(10L);\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/container/RedisCommandsContainer.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOTICE file distributed with\n * this work for additional information regarding copyright ownership.\n * The ASF licenses this file to You under the Apache License, Version 2.0\n * (the \"License\"); you may not use this file except in compliance with\n * the License.  You may obtain a copy of the License at\n *\n *    http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\npackage flink.examples.sql._03.source_sink.table.redis.container;\n\nimport java.io.Closeable;\nimport java.io.Serializable;\nimport java.util.List;\n\n/**\n * The container for all available Redis commands.\n */\npublic interface RedisCommandsContainer extends Closeable, Serializable {\n\n    void open() throws Exception;\n\n    byte[] get(byte[] key);\n\n    List<Object> multiGet(List<byte[]> key);\n\n    byte[] hget(byte[] key, byte[] hashField);\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/container/RedisCommandsContainerBuilder.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.container;\n\nimport java.util.Objects;\n\nimport org.apache.commons.pool2.impl.GenericObjectPoolConfig;\nimport org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisConfigBase;\nimport org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;\n\nimport redis.clients.jedis.JedisPool;\n\n\npublic class RedisCommandsContainerBuilder {\n\n    public static RedisCommandsContainer build(FlinkJedisConfigBase flinkJedisConfigBase) {\n        if (flinkJedisConfigBase instanceof FlinkJedisPoolConfig) {\n            FlinkJedisPoolConfig flinkJedisPoolConfig = (FlinkJedisPoolConfig) flinkJedisConfigBase;\n            return RedisCommandsContainerBuilder.build(flinkJedisPoolConfig);\n        }\n\n//        else if (flinkJedisConfigBase instanceof FlinkJedisClusterConfig) {\n//            FlinkJedisClusterConfig flinkJedisClusterConfig = (FlinkJedisClusterConfig) flinkJedisConfigBase;\n//            return RedisCommandsContainerBuilder.build(flinkJedisClusterConfig);\n//        } else if (flinkJedisConfigBase instanceof FlinkJedisSentinelConfig) {\n//            FlinkJedisSentinelConfig flinkJedisSentinelConfig = (FlinkJedisSentinelConfig) flinkJedisConfigBase;\n//            return RedisCommandsContainerBuilder.build(flinkJedisSentinelConfig);\n//        }\n\n        else {\n            throw new IllegalArgumentException(\"Jedis configuration not found\");\n        }\n    }\n\n    public static RedisCommandsContainer build(FlinkJedisPoolConfig jedisPoolConfig) {\n        Objects.requireNonNull(jedisPoolConfig, \"Redis pool config should not be Null\");\n\n        GenericObjectPoolConfig genericObjectPoolConfig = new GenericObjectPoolConfig();\n        genericObjectPoolConfig.setMaxIdle(jedisPoolConfig.getMaxIdle());\n        genericObjectPoolConfig.setMaxTotal(jedisPoolConfig.getMaxTotal());\n        genericObjectPoolConfig.setMinIdle(jedisPoolConfig.getMinIdle());\n\n        JedisPool jedisPool = new JedisPool(genericObjectPoolConfig, jedisPoolConfig.getHost(),\n                jedisPoolConfig.getPort(), jedisPoolConfig.getConnectionTimeout(), jedisPoolConfig.getPassword(),\n                jedisPoolConfig.getDatabase());\n        return new RedisContainer(jedisPool);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/container/RedisContainer.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.container;\n\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.util.List;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport redis.clients.jedis.Jedis;\nimport redis.clients.jedis.JedisPool;\nimport redis.clients.jedis.JedisSentinelPool;\nimport redis.clients.jedis.Pipeline;\n\n\npublic class RedisContainer implements RedisCommandsContainer, Closeable {\n\n    private static final long serialVersionUID = 1L;\n\n    private transient JedisPool jedisPool;\n    private transient JedisSentinelPool jedisSentinelPool;\n\n    private static final Logger\n            LOG = LoggerFactory.getLogger(RedisContainer.class);\n\n\n    public RedisContainer(JedisPool jedisPool) {\n\n        this.jedisPool = jedisPool;\n        this.jedisSentinelPool = null;\n    }\n\n    public RedisContainer(JedisSentinelPool sentinelPool) {\n\n        this.jedisPool = null;\n        this.jedisSentinelPool = sentinelPool;\n    }\n\n    private Jedis getInstance() {\n        if (jedisSentinelPool != null) {\n            return jedisSentinelPool.getResource();\n        } else {\n            return jedisPool.getResource();\n        }\n    }\n\n    private void releaseInstance(final Jedis jedis) {\n        if (jedis == null) {\n            return;\n        }\n        try {\n            jedis.close();\n        } catch (Exception e) {\n            LOG.error(\"Failed to close (return) instance to pool\", e);\n        }\n    }\n\n    @Override\n    public void open() throws Exception {\n        getInstance().echo(\"Test\");\n    }\n\n    @Override\n    public List<Object> multiGet(List<byte[]> key) {\n        Jedis jedis = null;\n        try {\n            jedis = getInstance();\n            Pipeline pipeline = jedis.pipelined();\n            key.forEach(pipeline::get);\n            return pipeline.syncAndReturnAll();\n        } catch (Exception e) {\n            if (LOG.isErrorEnabled()) {\n                LOG.error(\"Cannot send Redis message with command GET to key {} error message {}\",\n                        key, e.getMessage());\n            }\n            throw e;\n        } finally {\n            releaseInstance(jedis);\n        }\n    }\n\n    @Override\n    public byte[] get(byte[] key) {\n        Jedis jedis = null;\n        try {\n            jedis = getInstance();\n            return jedis.get(key);\n        } catch (Exception e) {\n            if (LOG.isErrorEnabled()) {\n                LOG.error(\"Cannot send Redis message with command GET to key {} error message {}\",\n                        key, e.getMessage());\n            }\n            throw e;\n        } finally {\n            releaseInstance(jedis);\n        }\n    }\n\n    @Override\n    public byte[] hget(byte[] key, byte[] hashField) {\n        Jedis jedis = null;\n        try {\n            jedis = getInstance();\n            return jedis.hget(key, hashField);\n        } catch (Exception e) {\n            if (LOG.isErrorEnabled()) {\n                LOG.error(\"Cannot send Redis message with command HGET to key {} hashField {} error message {}\",\n                        key, hashField, e.getMessage());\n            }\n            throw e;\n        } finally {\n            releaseInstance(jedis);\n        }\n    }\n\n    @Override\n    public void close() throws IOException {\n        if (this.jedisPool != null) {\n            this.jedisPool.close();\n        }\n        if (this.jedisSentinelPool != null) {\n            
this.jedisSentinelPool.close();\n        }\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/demo/RedisDemo.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.demo;\n\nimport java.util.HashMap;\n\nimport com.google.gson.Gson;\n\nimport redis.clients.jedis.Jedis;\nimport redis.clients.jedis.JedisPool;\n\n/**\n * redis 安装：https://blog.csdn.net/realize_dream/article/details/106227622\n * redis java client：https://www.cnblogs.com/chenyanbin/p/12088796.html\n */\npublic class RedisDemo {\n\n    public static void main(String[] args) {\n        singleConnect();\n        poolConnect();\n    }\n\n    public static void singleConnect() {\n        // jedis单实例连接\n        Jedis jedis = new Jedis(\"127.0.0.1\", 6379);\n        String result = jedis.get(\"a\");\n\n        HashMap<String, Object> h = new HashMap<>();\n\n        h.put(\"name\", \"namehhh\");\n        h.put(\"name1\", \"namehhh111\");\n        h.put(\"score\", 3L);\n\n        String s = new Gson().toJson(h);\n\n        jedis.set(\"a\", s);\n\n        System.out.println(result);\n        jedis.close();\n    }\n\n    public static void poolConnect() {\n        //jedis连接池\n        JedisPool pool = new JedisPool(\"127.0.0.1\", 6379);\n        Jedis jedis = pool.getResource();\n        String result = jedis.get(\"a\");\n        System.out.println(result);\n        jedis.close();\n        pool.close();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/mapper/LookupRedisMapper.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.mapper;\n\n\nimport java.io.IOException;\n\nimport org.apache.flink.api.common.serialization.AbstractDeserializationSchema;\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.common.serialization.SerializationSchema;\nimport org.apache.flink.table.data.RowData;\n\nimport com.google.common.base.Joiner;\n\n\npublic class LookupRedisMapper extends AbstractDeserializationSchema<RowData> implements SerializationSchema<Object[]> {\n\n\n    private DeserializationSchema<RowData> valueDeserializationSchema;\n\n    public LookupRedisMapper(DeserializationSchema<RowData> valueDeserializationSchema) {\n\n        this.valueDeserializationSchema = valueDeserializationSchema;\n\n    }\n\n    public RedisCommandDescription getCommandDescription() {\n        return new RedisCommandDescription(RedisCommand.GET);\n    }\n\n    @Override\n    public RowData deserialize(byte[] message) {\n        try {\n            return this.valueDeserializationSchema.deserialize(message);\n        } catch (IOException e) {\n            throw new RuntimeException(e);\n        }\n    }\n\n    @Override\n    public byte[] serialize(Object[] element) {\n        return Joiner.on(\":\").join(element).getBytes();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/mapper/RedisCommand.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.mapper;\n\nimport org.apache.flink.streaming.connectors.redis.common.mapper.RedisDataType;\n\n\npublic enum RedisCommand {\n\n    GET(RedisDataType.STRING),\n\n    HGET(RedisDataType.HASH),\n\n    ;\n\n    private RedisDataType redisDataType;\n\n    RedisCommand(RedisDataType redisDataType) {\n        this.redisDataType = redisDataType;\n    }\n\n    public RedisDataType getRedisDataType() {\n        return redisDataType;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/mapper/RedisCommandDescription.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.mapper;\n\nimport org.apache.flink.streaming.connectors.redis.common.mapper.RedisDataType;\n\n\npublic class RedisCommandDescription {\n\n    private static final long serialVersionUID = 1L;\n\n    private RedisCommand redisCommand;\n\n    private String additionalKey;\n\n    public RedisCommandDescription(RedisCommand redisCommand, String additionalKey) {\n\n        this.redisCommand = redisCommand;\n        this.additionalKey = additionalKey;\n\n        if (redisCommand.getRedisDataType() == RedisDataType.HASH) {\n            if (additionalKey == null) {\n                throw new IllegalArgumentException(\"Hash should have additional key\");\n            }\n        }\n    }\n\n    public RedisCommandDescription(RedisCommand redisCommand) {\n\n        this(redisCommand, null);\n    }\n\n    public RedisCommand getRedisCommand() {\n        return redisCommand;\n    }\n\n    public String getAdditionalKey() {\n        return additionalKey;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/mapper/SetRedisMapper.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.mapper;\n\nimport org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommand;\nimport org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommandDescription;\nimport org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;\nimport org.apache.flink.table.data.RowData;\n\n\npublic class SetRedisMapper implements RedisMapper<RowData> {\n\n    @Override\n    public RedisCommandDescription getCommandDescription() {\n        return new RedisCommandDescription(RedisCommand.SET);\n    }\n\n    @Override\n    public String getKeyFromData(RowData data) {\n        return data.getString(0).toString();\n    }\n\n    @Override\n    public String getValueFromData(RowData data) {\n        return data.getString(1).toString();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/options/RedisLookupOptions.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.options;\n\nimport java.io.Serializable;\n\n\npublic class RedisLookupOptions implements Serializable {\n    private static final long serialVersionUID = 1L;\n    private static final int DEFAULT_MAX_RETRY_TIMES = 3;\n\n    protected final String hostname;\n    protected final int port;\n\n    public String getHostname() {\n        return hostname;\n    }\n\n    public int getPort() {\n        return port;\n    }\n\n    private final long cacheMaxSize;\n    private final long cacheExpireMs;\n    private final int maxRetryTimes;\n    private final boolean lookupAsync;\n\n    private final boolean isBatchMode;\n\n    private final int batchSize;\n\n    private final int batchMinTriggerDelayMs;\n\n    public RedisLookupOptions(\n            long cacheMaxSize\n            , long cacheExpireMs\n            , int maxRetryTimes\n            , boolean lookupAsync\n            , String hostname\n            , int port\n            , boolean isBatchMode\n            , int batchSize\n            , int batchMinTriggerDelayMs) {\n        this.cacheMaxSize = cacheMaxSize;\n        this.cacheExpireMs = cacheExpireMs;\n        this.maxRetryTimes = maxRetryTimes;\n        this.lookupAsync = lookupAsync;\n\n        this.hostname = hostname;\n        this.port = port;\n        this.isBatchMode = isBatchMode;\n        this.batchSize = batchSize;\n        this.batchMinTriggerDelayMs = batchMinTriggerDelayMs;\n    }\n\n    public long getCacheMaxSize() {\n        return cacheMaxSize;\n    }\n\n    public long getCacheExpireMs() {\n        return cacheExpireMs;\n    }\n\n    public int getMaxRetryTimes() {\n        return maxRetryTimes;\n    }\n\n    public boolean getLookupAsync() {\n        return lookupAsync;\n    }\n\n    public static Builder builder() {\n        return new Builder();\n    }\n\n    public boolean isBatchMode() {\n        return isBatchMode;\n    }\n\n    public int getBatchSize() {\n        return batchSize;\n    }\n\n    public int getBatchMinTriggerDelayMs() {\n        return batchMinTriggerDelayMs;\n    }\n\n    /** Builder of {@link RedisLookupOptions}. */\n    public static class Builder {\n        private long cacheMaxSize = -1L;\n        private long cacheExpireMs = 0L;\n        private int maxRetryTimes = DEFAULT_MAX_RETRY_TIMES;\n        private boolean lookupAsync = false;\n\n        private boolean isBatchMode = false;\n\n\n        public Builder setIsBatchMode(boolean isBatchMode) {\n            this.isBatchMode = isBatchMode;\n            return this;\n        }\n\n        private int batchSize = 30;\n\n        public Builder setBatchSize(int batchSize) {\n            this.batchSize = batchSize;\n            return this;\n        }\n\n        private int batchMinTriggerDelayMs = 1000;\n\n        public Builder setBatchMinTriggerDelayMs(int batchMinTriggerDelayMs) {\n            this.batchMinTriggerDelayMs = batchMinTriggerDelayMs;\n            return this;\n        }\n\n        /** optional, lookup cache max size, over this value, the old data will be eliminated. */\n        public Builder setCacheMaxSize(long cacheMaxSize) {\n            this.cacheMaxSize = cacheMaxSize;\n            return this;\n        }\n\n        /** optional, lookup cache expire mills, over this time, the old data will expire. 
*/\n        public Builder setCacheExpireMs(long cacheExpireMs) {\n            this.cacheExpireMs = cacheExpireMs;\n            return this;\n        }\n\n        /** optional, max retry times for the redis connector. */\n        public Builder setMaxRetryTimes(int maxRetryTimes) {\n            this.maxRetryTimes = maxRetryTimes;\n            return this;\n        }\n\n        /** optional, whether to set async lookup. */\n        public Builder setLookupAsync(boolean lookupAsync) {\n            this.lookupAsync = lookupAsync;\n            return this;\n        }\n\n        protected String hostname = \"localhost\";\n\n        protected int port = 6379;\n\n        /**\n         * optional, hostname of the redis server to connect to.\n         */\n        public Builder setHostname(String hostname) {\n            this.hostname = hostname;\n            return this;\n        }\n\n        /**\n         * optional, port of the redis server to connect to.\n         */\n        public Builder setPort(int port) {\n            this.port = port;\n            return this;\n        }\n\n        public RedisLookupOptions build() {\n            return new RedisLookupOptions(\n                    cacheMaxSize\n                    , cacheExpireMs\n                    , maxRetryTimes\n                    , lookupAsync\n                    , hostname\n                    , port\n                    , isBatchMode\n                    , batchSize\n                    , batchMinTriggerDelayMs);\n        }\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/options/RedisOptions.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.options;\n\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions.WRITE_MODE;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions.WRITE_TTL;\nimport static org.apache.flink.table.types.logical.utils.LogicalTypeChecks.hasRoot;\n\nimport java.time.Duration;\nimport java.util.stream.IntStream;\n\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ConfigOptions;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.types.logical.LogicalType;\nimport org.apache.flink.table.types.logical.LogicalTypeRoot;\nimport org.apache.flink.table.types.logical.utils.LogicalTypeChecks;\nimport org.apache.flink.util.Preconditions;\n\n\npublic class RedisOptions {\n\n    public static final ConfigOption<Integer> TIMEOUT = ConfigOptions\n            .key(\"timeout\")\n            .intType()\n            .defaultValue(2000)\n            .withDescription(\"Optional timeout for connect to redis\");\n\n    public static final ConfigOption<Integer> MAXIDLE = ConfigOptions\n            .key(\"maxIdle\")\n            .intType()\n            .defaultValue(2)\n            .withDescription(\"Optional maxIdle for connect to redis\");\n\n    public static final ConfigOption<Integer> MINIDLE = ConfigOptions\n            .key(\"minIdle\")\n            .intType()\n            .defaultValue(1)\n            .withDescription(\"Optional minIdle for connect to redis\");\n\n    public static final ConfigOption<String> PASSWORD = ConfigOptions\n            .key(\"password\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional password for connect to redis\");\n\n    public static final ConfigOption<Integer> PORT = ConfigOptions\n            .key(\"port\")\n            .intType()\n            .defaultValue(6379)\n            .withDescription(\"Optional port for connect to redis\");\n\n    public static final ConfigOption<String> HOSTNAME = ConfigOptions\n            .key(\"hostname\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional host for connect to redis\");\n\n    public static final ConfigOption<String> CLUSTERNODES = ConfigOptions\n            .key(\"cluster-nodes\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional nodes for connect to redis cluster\");\n\n    public static final ConfigOption<Integer> DATABASE = ConfigOptions\n            .key(\"database\")\n            .intType()\n            .defaultValue(0)\n            .withDescription(\"Optional database for connect to redis\");\n\n\n    public static final ConfigOption<String> COMMAND = ConfigOptions\n            .key(\"command\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional command for connect to redis\");\n\n    public static final ConfigOption<String> REDISMODE = ConfigOptions\n            .key(\"redis-mode\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional redis-mode for connect to redis\");\n\n    public static final ConfigOption<String> REDIS_MASTER_NAME = ConfigOptions\n            .key(\"master.name\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional master.name for connect to redis sentinels\");\n\n    public 
static final ConfigOption<String> SENTINELS_INFO = ConfigOptions\n            .key(\"sentinels.info\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional sentinels.info for connect to redis sentinels\");\n\n    public static final ConfigOption<String> SENTINELS_PASSWORD = ConfigOptions\n            .key(\"sentinels.password\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional sentinels.password for connect to redis sentinels\");\n\n    public static final ConfigOption<String> KEY_COLUMN = ConfigOptions\n            .key(\"key-column\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional key-column for insert to redis\");\n\n    public static final ConfigOption<String> VALUE_COLUMN = ConfigOptions\n            .key(\"value-column\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional value_column for insert to redis\");\n\n\n    public static final ConfigOption<String> FIELD_COLUMN = ConfigOptions\n            .key(\"field-column\")\n            .stringType()\n            .noDefaultValue()\n            .withDescription(\"Optional field_column for insert to redis\");\n\n\n    public static final ConfigOption<Boolean> PUT_IF_ABSENT = ConfigOptions\n            .key(\"put-if-absent\")\n            .booleanType()\n            .defaultValue(false)\n            .withDescription(\"Optional put_if_absent for insert to redis\");\n\n    public static final ConfigOption<Boolean> LOOKUP_ASYNC =\n            ConfigOptions.key(\"lookup.async\")\n                    .booleanType()\n                    .defaultValue(false)\n                    .withDescription(\"whether to set async lookup.\");\n\n    public static final ConfigOption<Long> LOOKUP_CACHE_MAX_ROWS =\n            ConfigOptions.key(\"lookup.cache.max-rows\")\n                    .longType()\n                    .defaultValue(-1L)\n                    .withDescription(\n                            \"the max number of rows of lookup cache, over this value, the oldest rows will \"\n                                    + \"be eliminated. \\\"cache.max-rows\\\" and \\\"cache.ttl\\\" options must all be \"\n                                    + \"specified if any of them is \"\n                                    + \"specified. 
Cache is not enabled as default.\");\n\n    public static final ConfigOption<Duration> LOOKUP_CACHE_TTL =\n            ConfigOptions.key(\"lookup.cache.ttl\")\n                    .durationType()\n                    .defaultValue(Duration.ofSeconds(0))\n                    .withDescription(\"the cache time to live.\");\n\n    public static final ConfigOption<Integer> LOOKUP_MAX_RETRIES =\n            ConfigOptions.key(\"lookup.max-retries\")\n                    .intType()\n                    .defaultValue(3)\n                    .withDescription(\"the max retry times if lookup database failed.\");\n\n    public static RedisLookupOptions getRedisLookupOptions(ReadableConfig tableOptions) {\n        return (RedisLookupOptions) RedisLookupOptions\n                .builder()\n                .setLookupAsync(tableOptions.get(LOOKUP_ASYNC))\n                .setMaxRetryTimes(tableOptions.get(LOOKUP_MAX_RETRIES))\n                .setCacheExpireMs(tableOptions.get(LOOKUP_CACHE_TTL).toMillis())\n                .setCacheMaxSize(tableOptions.get(LOOKUP_CACHE_MAX_ROWS))\n                .setHostname(tableOptions.get(HOSTNAME))\n                .setPort(tableOptions.get(PORT))\n                .build();\n    }\n\n    public static RedisWriteOptions getRedisWriteOptions(ReadableConfig tableOptions) {\n        return (RedisWriteOptions) RedisWriteOptions\n                .builder()\n                .setWriteTtl(tableOptions.get(WRITE_TTL))\n                .setWriteMode(tableOptions.get(WRITE_MODE))\n                .setHostname(tableOptions.get(HOSTNAME))\n                .setPort(tableOptions.get(PORT))\n                .build();\n    }\n\n    /**\n     * Creates an array of indices that determine which physical fields of the table schema to\n     * include in the value format.\n     *\n     * <p>See {@link #VALUE_FORMAT}, {@link #VALUE_FIELDS_INCLUDE}, and {@link #KEY_FIELDS_PREFIX}\n     * for more information.\n     */\n    public static int[] createValueFormatProjection(\n            DataType physicalDataType) {\n        final LogicalType physicalType = physicalDataType.getLogicalType();\n        Preconditions.checkArgument(\n                hasRoot(physicalType, LogicalTypeRoot.ROW), \"Row data type expected.\");\n        final int physicalFieldCount = LogicalTypeChecks.getFieldCount(physicalType);\n        final IntStream physicalFields = IntStream.range(0, physicalFieldCount);\n\n        return physicalFields.toArray();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/options/RedisWriteOptions.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.options;\n\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ConfigOptions;\n\n\npublic class RedisWriteOptions {\n\n    protected final String hostname;\n    protected final int port;\n\n    public String getHostname() {\n        return hostname;\n    }\n\n    public int getPort() {\n        return port;\n    }\n\n    private int writeTtl;\n\n    private final String writeMode;\n\n    private final boolean isBatchMode;\n\n    private final int batchSize;\n\n    public static final ConfigOption<Integer> WRITE_TTL = ConfigOptions\n            .key(\"write.ttl\")\n            .intType()\n            .defaultValue(24 * 3600)\n            .withDescription(\"Optional ttl for insert to redis\");\n\n    public static final ConfigOption<String> WRITE_MODE = ConfigOptions\n            .key(\"write.mode\")\n            .stringType()\n            .defaultValue(\"string\")\n            .withDescription(\"mode for insert to redis\");\n\n    public static final ConfigOption<Boolean> IS_BATCH_MODE = ConfigOptions\n            .key(\"is.batch.mode\")\n            .booleanType()\n            .defaultValue(false)\n            .withDescription(\"if is.batch.mode is ture, means it can cache records and hit redis using jedis pipeline.\");\n\n    public static final ConfigOption<Integer> BATCH_SIZE = ConfigOptions\n            .key(\"batch.size\")\n            .intType()\n            .defaultValue(30)\n            .withDescription(\"jedis pipeline batch size.\");\n\n    public RedisWriteOptions(int writeTtl, String hostname, int port, String writeMode, boolean isBatchMode, int batchSize) {\n        this.writeTtl = writeTtl;\n        this.hostname = hostname;\n        this.port = port;\n        this.writeMode = writeMode;\n        this.isBatchMode = isBatchMode;\n        this.batchSize = batchSize;\n    }\n\n    public int getWriteTtl() {\n        return writeTtl;\n    }\n\n    public static Builder builder() {\n        return new Builder();\n    }\n\n    public String getWriteMode() {\n        return writeMode;\n    }\n\n    public boolean isBatchMode() {\n        return isBatchMode;\n    }\n\n    public int getBatchSize() {\n        return batchSize;\n    }\n\n    /** Builder of {@link RedisWriteOptions}. */\n    public static class Builder {\n        private int writeTtl = 24 * 3600;\n\n        /** optional, max retry times for Redis connector. 
*/\n        public Builder setWriteTtl(int writeTtl) {\n            this.writeTtl = writeTtl;\n            return this;\n        }\n\n        protected String hostname = \"localhost\";\n\n        protected int port = 6379;\n\n        private String writeMode = \"string\";\n\n        private boolean isBatchMode = false;\n\n        private int batchSize = 30;\n\n        /**\n         * optional, hostname of the redis server to connect to.\n         */\n        public Builder setHostname(String hostname) {\n            this.hostname = hostname;\n            return this;\n        }\n\n        /**\n         * optional, port of the redis server to connect to.\n         */\n        public Builder setPort(int port) {\n            this.port = port;\n            return this;\n        }\n\n        public Builder setWriteMode(String writeMode) {\n            this.writeMode = writeMode;\n            return this;\n        }\n\n        public RedisWriteOptions build() {\n            return new RedisWriteOptions(writeTtl, hostname, port, writeMode, isBatchMode, batchSize);\n        }\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v1/RedisDynamicTableFactory.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.v1;\n\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.HOSTNAME;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.PORT;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.api.TableSchema;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.sink.DynamicTableSink;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.factories.DeserializationFormatFactory;\nimport org.apache.flink.table.factories.DynamicTableSinkFactory;\nimport org.apache.flink.table.factories.DynamicTableSourceFactory;\nimport org.apache.flink.table.factories.FactoryUtil;\n\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisLookupOptions;\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisOptions;\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions;\nimport flink.examples.sql._03.source_sink.table.redis.v1.source.RedisDynamicTableSource;\n\n//import flink.examples.sql._03.source_sink.table.redis.v1.sink.RedisDynamicTableSink;\n\n\npublic class RedisDynamicTableFactory implements DynamicTableSourceFactory, DynamicTableSinkFactory {\n\n    @Override\n    public DynamicTableSink createDynamicTableSink(Context context) {\n\n        // either implement your custom validation logic here ...\n        // or use the provided helper utility\n        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n\n        // discover a suitable decoding format\n        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat = helper.discoverDecodingFormat(\n                DeserializationFormatFactory.class,\n                FactoryUtil.FORMAT);\n\n        // validate all options\n        helper.validate();\n\n        // get the validated options\n        final ReadableConfig options = helper.getOptions();\n\n        final RedisWriteOptions redisWriteOptions = RedisOptions.getRedisWriteOptions(options);\n\n        TableSchema schema = context.getCatalogTable().getSchema();\n\n//        return new RedisDynamicTableSink(\n//                schema.toPhysicalRowDataType()\n//                , decodingFormat\n//                , redisWriteOptions);\n\n        return null;\n    }\n\n    @Override\n    public String factoryIdentifier() {\n        return \"redis\";\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(HOSTNAME);\n        options.add(PORT);\n        options.add(FactoryUtil.FORMAT); // use pre-defined option for format\n        return options;\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        //        options.add(COMMAND);\n        //        options.add(KEY_COLUMN);\n        //        options.add(VALUE_COLUMN);\n        //        options.add(FIELD_COLUMN);\n        //        options.add(TTL);\n        return options;\n    }\n\n    @Override\n    public DynamicTableSource createDynamicTableSource(Context context) {\n\n        
// either implement your custom validation logic here ...\n        // or use the provided helper utility\n        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n\n        // discover a suitable decoding format\n        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat = helper.discoverDecodingFormat(\n                DeserializationFormatFactory.class,\n                FactoryUtil.FORMAT);\n\n        // validate all options\n        helper.validate();\n\n        // get the validated options\n        final ReadableConfig options = helper.getOptions();\n\n        final RedisLookupOptions redisLookupOptions = RedisOptions.getRedisLookupOptions(options);\n\n        TableSchema schema = context.getCatalogTable().getSchema();\n\n        return new RedisDynamicTableSource(\n                schema.toPhysicalRowDataType()\n                , decodingFormat\n                , redisLookupOptions);\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v1/sink/RedisDynamicTableSink.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.v1.sink;//package flink.examples.sql._03.source_sink.table.redis.v1.sink;\n//\n//import javax.annotation.Nullable;\n//\n//import org.apache.flink.api.common.serialization.DeserializationSchema;\n//import org.apache.flink.streaming.connectors.redis.RedisSink;\n//import org.apache.flink.table.connector.ChangelogMode;\n//import org.apache.flink.table.connector.format.DecodingFormat;\n//import org.apache.flink.table.connector.sink.DynamicTableSink;\n//import org.apache.flink.table.connector.sink.SinkFunctionProvider;\n//import org.apache.flink.table.data.RowData;\n//import org.apache.flink.table.types.DataType;\n//import org.apache.flink.util.Preconditions;\n//\n//import flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions;\n//\n//public class RedisDynamicTableSink implements DynamicTableSink {\n//\n//    /**\n//     * Data type to configure the formats.\n//     */\n//    protected final DataType physicalDataType;\n//\n//    /**\n//     * Optional format for decoding keys from Kafka.\n//     */\n//    protected final @Nullable\n//    DecodingFormat<DeserializationSchema<RowData>> decodingFormat;\n//\n//    protected final RedisWriteOptions redisWriteOptions;\n//\n//    public RedisDynamicTableSink(DataType physicalDataType\n//            , DecodingFormat<DeserializationSchema<RowData>> decodingFormat\n//            , RedisWriteOptions redisWriteOptions) {\n//\n//        // Format attributes\n//        this.physicalDataType =\n//                Preconditions.checkNotNull(\n//                        physicalDataType, \"Physical data type must not be null.\");\n//        this.decodingFormat = decodingFormat;\n//        this.redisWriteOptions = redisWriteOptions;\n//    }\n//\n//\n//    @Override\n//    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {\n//        return null;\n//    }\n//\n//    @Override\n//    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {\n//        return SinkFunctionProvider.of(new RedisSink<RowData>(flinkJedisConfigBase, redisMapper));\n//    }\n//\n//    @Override\n//    public DynamicTableSink copy() {\n//        return new RedisDynamicTableSink(tableSchema, config);\n//    }\n//\n//    @Override\n//    public String asSummaryString() {\n//        return \"REDIS\";\n//    }\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v1/source/RedisDynamicTableSource.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.v1.source;\n\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.createValueFormatProjection;\n\nimport javax.annotation.Nullable;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.connector.source.LookupTableSource;\nimport org.apache.flink.table.connector.source.TableFunctionProvider;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.types.utils.DataTypeUtils;\nimport org.apache.flink.util.Preconditions;\n\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisLookupOptions;\n\n\npublic class RedisDynamicTableSource implements LookupTableSource {\n\n    /**\n     * Data type to configure the formats.\n     */\n    protected final DataType physicalDataType;\n\n    /**\n     * Optional format for decoding keys from Kafka.\n     */\n    protected final @Nullable DecodingFormat<DeserializationSchema<RowData>> decodingFormat;\n\n    protected final RedisLookupOptions redisLookupOptions;\n\n    public RedisDynamicTableSource(\n            DataType physicalDataType\n            , DecodingFormat<DeserializationSchema<RowData>> decodingFormat\n            , RedisLookupOptions redisLookupOptions) {\n\n        // Format attributes\n        this.physicalDataType =\n                Preconditions.checkNotNull(\n                        physicalDataType, \"Physical data type must not be null.\");\n        this.decodingFormat = decodingFormat;\n        this.redisLookupOptions = redisLookupOptions;\n    }\n\n\n    @Override\n    public LookupRuntimeProvider getLookupRuntimeProvider(LookupContext context) {\n        return TableFunctionProvider.of(new RedisRowDataLookupFunction(\n                this.redisLookupOptions\n                , this.createDeserialization(context, this.decodingFormat, createValueFormatProjection(this.physicalDataType))));\n    }\n\n    private @Nullable DeserializationSchema<RowData> createDeserialization(\n            Context context,\n            @Nullable DecodingFormat<DeserializationSchema<RowData>> format,\n            int[] projection) {\n        if (format == null) {\n            return null;\n        }\n        DataType physicalFormatDataType =\n                DataTypeUtils.projectRow(this.physicalDataType, projection);\n        return format.createRuntimeDecoder(context, physicalFormatDataType);\n    }\n\n    @Override\n    public DynamicTableSource copy() {\n        return null;\n    }\n\n    @Override\n    public String asSummaryString() {\n        return null;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v1/source/RedisRowDataLookupFunction.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage flink.examples.sql._03.source_sink.table.redis.v1.source;\n\nimport java.io.IOException;\nimport java.util.concurrent.TimeUnit;\nimport java.util.function.Consumer;\n\nimport org.apache.flink.annotation.Internal;\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.common.serialization.SerializationSchema;\nimport org.apache.flink.metrics.Gauge;\nimport org.apache.flink.shaded.guava18.com.google.common.cache.Cache;\nimport org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.functions.FunctionContext;\nimport org.apache.flink.table.functions.TableFunction;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport com.google.common.base.Joiner;\n\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisLookupOptions;\nimport redis.clients.jedis.Jedis;\n\n/**\n * The RedisRowDataLookupFunction is a standard user-defined table function, it can be used in\n * tableAPI and also useful for temporal table join plan in SQL. It looks up the result as {@link\n * RowData}.\n */\n@Internal\npublic class RedisRowDataLookupFunction extends TableFunction<RowData> {\n\n    private static final Logger LOG = LoggerFactory.getLogger(RedisRowDataLookupFunction.class);\n    private static final long serialVersionUID = 1L;\n\n    private transient Jedis jedis;\n\n    private final String hostname;\n    private final int port;\n\n    private final long cacheMaxSize;\n    private final long cacheExpireMs;\n    private final int maxRetryTimes;\n    private transient Cache<Object, RowData> cache;\n    private final SerializationSchema<Object[]> keySerializationSchema;\n    private final DeserializationSchema<RowData> valueDeserializationSchema;\n\n    private transient Consumer<Object[]> evaler;\n\n    public RedisRowDataLookupFunction(\n            RedisLookupOptions lookupOptions\n            , DeserializationSchema<RowData> valueDeserializationSchema) {\n        this.hostname = lookupOptions.getHostname();\n        this.port = lookupOptions.getPort();\n        this.cacheMaxSize = lookupOptions.getCacheMaxSize();\n        this.cacheExpireMs = lookupOptions.getCacheExpireMs();\n        this.maxRetryTimes = lookupOptions.getMaxRetryTimes();\n        this.valueDeserializationSchema = valueDeserializationSchema;\n        this.keySerializationSchema = elements -> Joiner.on(\":\").join(elements).getBytes();\n    }\n\n    /**\n     * The invoke entry point of lookup function.\n     *\n     * @param objects the lookup key. Currently only support single rowkey.\n     */\n    public void eval(Object... 
objects) throws IOException {\n\n        for (int retry = 0; retry <= maxRetryTimes; retry++) {\n            try {\n                // fetch result\n                this.evaler.accept(objects);\n                break;\n            } catch (Exception e) {\n                LOG.error(String.format(\"Redis lookup error, retry times = %d\", retry), e);\n                if (retry >= maxRetryTimes) {\n                    throw new RuntimeException(\"Execution of Redis lookup failed.\", e);\n                }\n                try {\n                    Thread.sleep(1000 * retry);\n                } catch (InterruptedException e1) {\n                    throw new RuntimeException(e1);\n                }\n            }\n        }\n    }\n\n\n    @Override\n    public void open(FunctionContext context) {\n        LOG.info(\"start open ...\");\n        this.jedis = new Jedis(this.hostname, this.port);\n\n        this.cache = cacheMaxSize <= 0 || cacheExpireMs <= 0 ? null : CacheBuilder.newBuilder()\n                .recordStats()\n                .expireAfterWrite(cacheExpireMs, TimeUnit.MILLISECONDS)\n                .maximumSize(cacheMaxSize)\n                .build();\n\n        if (cache != null) {\n            context.getMetricGroup()\n                    .gauge(\"lookupCacheHitRate\", (Gauge<Double>) () -> cache.stats().hitRate());\n\n\n            this.evaler = in -> {\n                // key the cache on the serialized redis key so repeated lookups of the same key can hit it\n                byte[] key = this.keySerializationSchema.serialize(in);\n                String cacheKey = new String(key);\n                RowData cacheRowData = cache.getIfPresent(cacheKey);\n                if (cacheRowData != null) {\n                    collect(cacheRowData);\n                } else {\n                    // fetch result\n                    byte[] result = this.jedis.get(key);\n                    if (null != result && result.length > 0) {\n\n                        RowData rowData = null;\n                        try {\n                            rowData = this.valueDeserializationSchema.deserialize(result);\n                        } catch (IOException e) {\n                            throw new RuntimeException(e);\n                        }\n\n                        // parse and collect\n                        collect(rowData);\n                        cache.put(cacheKey, rowData);\n                    }\n                }\n            };\n\n        } else {\n            this.evaler = in -> {\n                // fetch result\n                byte[] key = this.keySerializationSchema.serialize(in);\n                byte[] result = this.jedis.get(key);\n\n                if (null != result && result.length > 0) {\n\n                    RowData rowData = null;\n                    try {\n                        rowData = this.valueDeserializationSchema.deserialize(result);\n                    } catch (IOException e) {\n                        throw new RuntimeException(e);\n                    }\n\n                    // parse and collect\n                    collect(rowData);\n                }\n            };\n        }\n\n        LOG.info(\"end open.\");\n    }\n\n    @Override\n    public void close() {\n        LOG.info(\"start close ...\");\n        if (null != jedis) {\n            this.jedis.close();\n            this.jedis = null;\n        }\n        LOG.info(\"end close.\");\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v2/RedisDynamicTableFactory.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.v2;\n\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.HOSTNAME;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.LOOKUP_CACHE_MAX_ROWS;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.LOOKUP_CACHE_TTL;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.LOOKUP_MAX_RETRIES;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.PORT;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions.BATCH_SIZE;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions.IS_BATCH_MODE;\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions.WRITE_MODE;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.api.TableSchema;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.sink.DynamicTableSink;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.factories.DeserializationFormatFactory;\nimport org.apache.flink.table.factories.DynamicTableSinkFactory;\nimport org.apache.flink.table.factories.DynamicTableSourceFactory;\nimport org.apache.flink.table.factories.FactoryUtil;\n\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisLookupOptions;\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisOptions;\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions;\nimport flink.examples.sql._03.source_sink.table.redis.v2.sink.RedisDynamicTableSink;\nimport flink.examples.sql._03.source_sink.table.redis.v2.source.RedisDynamicTableSource;\n\n\npublic class RedisDynamicTableFactory implements DynamicTableSourceFactory, DynamicTableSinkFactory {\n\n    @Override\n    public String factoryIdentifier() {\n        return \"redis\";\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(HOSTNAME);\n        options.add(PORT);\n        return options;\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(FactoryUtil.FORMAT); // use pre-defined option for format\n        options.add(LOOKUP_CACHE_MAX_ROWS);\n        options.add(LOOKUP_CACHE_TTL);\n        options.add(LOOKUP_MAX_RETRIES);\n        options.add(WRITE_MODE);\n        options.add(IS_BATCH_MODE);\n        options.add(BATCH_SIZE);\n        return options;\n    }\n\n    @Override\n    public DynamicTableSource createDynamicTableSource(Context context) {\n\n        // either implement your custom validation logic here ...\n        // or use the provided helper utility\n        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n\n        // discover a suitable decoding format\n        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat = helper.discoverDecodingFormat(\n                
DeserializationFormatFactory.class,\n                FactoryUtil.FORMAT);\n\n        // validate all options\n        helper.validate();\n\n        // get the validated options\n        final ReadableConfig options = helper.getOptions();\n\n        final RedisLookupOptions redisLookupOptions = RedisOptions.getRedisLookupOptions(options);\n\n        TableSchema schema = context.getCatalogTable().getSchema();\n\n        Configuration c = (Configuration) context.getConfiguration();\n\n        boolean isDimBatchMode = c.getBoolean(\"is.dim.batch.mode\", false);\n\n        return new RedisDynamicTableSource(\n                schema.toPhysicalRowDataType()\n                , decodingFormat\n                , redisLookupOptions\n                , isDimBatchMode);\n    }\n\n    @Override\n    public DynamicTableSink createDynamicTableSink(Context context) {\n\n        // either implement your custom validation logic here ...\n        // or use the provided helper utility\n        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n\n        // discover a suitable decoding format\n//        final EncodingFormat<SerializationSchema<RowData>> encodingFormat = helper.discoverEncodingFormat(\n//                SerializationFormatFactory.class,\n//                FactoryUtil.FORMAT);\n\n        // validate all options\n        helper.validate();\n\n        // get the validated options\n        final ReadableConfig options = helper.getOptions();\n\n        final RedisWriteOptions redisWriteOptions = RedisOptions.getRedisWriteOptions(options);\n\n        TableSchema schema = context.getCatalogTable().getSchema();\n\n        return new RedisDynamicTableSink(schema.toPhysicalRowDataType()\n                , redisWriteOptions);\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v2/sink/RedisDynamicTableSink.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.v2.sink;\n\nimport javax.annotation.Nullable;\n\nimport org.apache.flink.api.common.serialization.SerializationSchema;\nimport org.apache.flink.streaming.connectors.redis.RedisSink;\nimport org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisConfigBase;\nimport org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;\nimport org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;\nimport org.apache.flink.table.connector.ChangelogMode;\nimport org.apache.flink.table.connector.format.EncodingFormat;\nimport org.apache.flink.table.connector.sink.DynamicTableSink;\nimport org.apache.flink.table.connector.sink.SinkFunctionProvider;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.types.utils.DataTypeUtils;\nimport org.apache.flink.types.RowKind;\nimport org.apache.flink.util.Preconditions;\n\nimport flink.examples.sql._03.source_sink.table.redis.mapper.SetRedisMapper;\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisWriteOptions;\n\n/**\n * https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sourcessinks/\n *\n * https://www.alibabacloud.com/help/zh/faq-detail/118038.htm?spm=a2c63.q38357.a3.16.48fa711fo1gVUd\n */\npublic class RedisDynamicTableSink implements DynamicTableSink {\n\n    /**\n     * Data type to configure the formats.\n     */\n    protected final DataType physicalDataType;\n\n    protected final RedisWriteOptions redisWriteOptions;\n\n    public RedisDynamicTableSink(\n            DataType physicalDataType\n            , RedisWriteOptions redisWriteOptions) {\n\n        // Format attributes\n        this.physicalDataType =\n                Preconditions.checkNotNull(\n                        physicalDataType, \"Physical data type must not be null.\");\n        this.redisWriteOptions = redisWriteOptions;\n    }\n\n    private @Nullable\n    SerializationSchema<RowData> createSerialization(\n            Context context,\n            @Nullable EncodingFormat<SerializationSchema<RowData>> format,\n            int[] projection) {\n        if (format == null) {\n            return null;\n        }\n        DataType physicalFormatDataType =\n                DataTypeUtils.projectRow(this.physicalDataType, projection);\n        return format.createRuntimeEncoder(context, physicalFormatDataType);\n    }\n\n    @Override\n    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {\n        // UPSERT mode\n        ChangelogMode.Builder builder = ChangelogMode.newBuilder();\n        for (RowKind kind : requestedMode.getContainedKinds()) {\n            if (kind != RowKind.UPDATE_BEFORE) {\n                builder.addContainedKind(kind);\n            }\n        }\n        return builder.build();\n    }\n\n    @Override\n    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {\n        FlinkJedisConfigBase flinkJedisConfigBase = new FlinkJedisPoolConfig.Builder()\n                .setHost(this.redisWriteOptions.getHostname())\n                .setPort(this.redisWriteOptions.getPort())\n                .build();\n\n        RedisMapper<RowData> redisMapper = null;\n\n        switch (this.redisWriteOptions.getWriteMode()) {\n            case \"string\":\n                redisMapper = new SetRedisMapper();\n                break;\n            default:\n                throw new RuntimeException(\"其他类型 write mode 请自定义实现\");\n        
}\n\n        return SinkFunctionProvider.of(new RedisSink<>(\n                flinkJedisConfigBase\n                , redisMapper));\n    }\n\n    @Override\n    public DynamicTableSink copy() {\n        return null;\n    }\n\n    @Override\n    public String asSummaryString() {\n        return \"redis\";\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v2/source/RedisDynamicTableSource.java",
    "content": "package flink.examples.sql._03.source_sink.table.redis.v2.source;\n\nimport static flink.examples.sql._03.source_sink.table.redis.options.RedisOptions.createValueFormatProjection;\n\nimport javax.annotation.Nullable;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisConfigBase;\nimport org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.connector.source.LookupTableSource;\nimport org.apache.flink.table.connector.source.TableFunctionProvider;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.types.utils.DataTypeUtils;\nimport org.apache.flink.util.Preconditions;\n\nimport flink.examples.sql._03.source_sink.table.redis.mapper.LookupRedisMapper;\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisLookupOptions;\n\n\npublic class RedisDynamicTableSource implements LookupTableSource {\n\n    /**\n     * Data type to configure the formats.\n     */\n    protected final DataType physicalDataType;\n\n    /**\n     * Optional format for decoding keys from Kafka.\n     */\n    protected final @Nullable DecodingFormat<DeserializationSchema<RowData>> decodingFormat;\n\n    protected final RedisLookupOptions redisLookupOptions;\n\n    private final boolean isDimBatchMode;\n\n    public RedisDynamicTableSource(\n            DataType physicalDataType\n            , DecodingFormat<DeserializationSchema<RowData>> decodingFormat\n            , RedisLookupOptions redisLookupOptions\n            , boolean isDimBatchMode) {\n\n        // Format attributes\n        this.physicalDataType =\n                Preconditions.checkNotNull(\n                        physicalDataType, \"Physical data type must not be null.\");\n        this.decodingFormat = decodingFormat;\n        this.redisLookupOptions = redisLookupOptions;\n\n        this.isDimBatchMode = isDimBatchMode;\n    }\n\n\n    @Override\n    public LookupRuntimeProvider getLookupRuntimeProvider(LookupContext context) {\n\n        FlinkJedisConfigBase flinkJedisConfigBase = new FlinkJedisPoolConfig.Builder()\n                .setHost(this.redisLookupOptions.getHostname())\n                .setPort(this.redisLookupOptions.getPort())\n                .build();\n\n        LookupRedisMapper lookupRedisMapper = new LookupRedisMapper(\n                this.createDeserialization(context, this.decodingFormat, createValueFormatProjection(this.physicalDataType)));\n\n        if (isDimBatchMode) {\n            return TableFunctionProvider.of(new RedisRowDataBatchLookupFunction(\n                    flinkJedisConfigBase\n                    , lookupRedisMapper\n                    , this.redisLookupOptions));\n//            return TableFunctionProvider.of(new RedisRowDataLookupFunction(\n//                    flinkJedisConfigBase\n//                    , lookupRedisMapper\n//                    , this.redisLookupOptions));\n        } else {\n            return TableFunctionProvider.of(new RedisRowDataLookupFunction(\n                    flinkJedisConfigBase\n                    , lookupRedisMapper\n                    , this.redisLookupOptions));\n        }\n    }\n\n    private @Nullable DeserializationSchema<RowData> createDeserialization(\n            Context context,\n         
   @Nullable DecodingFormat<DeserializationSchema<RowData>> format,\n            int[] projection) {\n        if (format == null) {\n            return null;\n        }\n        DataType physicalFormatDataType =\n                DataTypeUtils.projectRow(this.physicalDataType, projection);\n        return format.createRuntimeDecoder(context, physicalFormatDataType);\n    }\n\n    @Override\n    public DynamicTableSource copy() {\n        return null;\n    }\n\n    @Override\n    public String asSummaryString() {\n        return null;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v2/source/RedisRowDataBatchLookupFunction.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage flink.examples.sql._03.source_sink.table.redis.v2.source;\n\nimport java.io.IOException;\nimport java.util.List;\nimport java.util.concurrent.TimeUnit;\nimport java.util.function.Consumer;\nimport java.util.stream.Collectors;\n\nimport org.apache.flink.annotation.Internal;\nimport org.apache.flink.metrics.Gauge;\nimport org.apache.flink.shaded.guava18.com.google.common.cache.Cache;\nimport org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;\nimport org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisConfigBase;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.data.binary.BinaryStringData;\nimport org.apache.flink.table.functions.FunctionContext;\nimport org.apache.flink.table.functions.TableFunction;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport flink.examples.sql._03.source_sink.table.redis.container.RedisCommandsContainer;\nimport flink.examples.sql._03.source_sink.table.redis.container.RedisCommandsContainerBuilder;\nimport flink.examples.sql._03.source_sink.table.redis.mapper.LookupRedisMapper;\nimport flink.examples.sql._03.source_sink.table.redis.mapper.RedisCommand;\nimport flink.examples.sql._03.source_sink.table.redis.mapper.RedisCommandDescription;\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisLookupOptions;\n\n/**\n * The RedisRowDataLookupFunction is a standard user-defined table function, it can be used in\n * tableAPI and also useful for temporal table join plan in SQL. 
It looks up the result as {@link\n * RowData}.\n */\n@Internal\npublic class RedisRowDataBatchLookupFunction extends TableFunction<List<RowData>> {\n\n    private static final Logger LOG = LoggerFactory.getLogger(\n            RedisRowDataBatchLookupFunction.class);\n    private static final long serialVersionUID = 1L;\n\n    private String additionalKey;\n    private LookupRedisMapper lookupRedisMapper;\n    private RedisCommand redisCommand;\n\n    protected final RedisLookupOptions redisLookupOptions;\n\n    private FlinkJedisConfigBase flinkJedisConfigBase;\n    private RedisCommandsContainer redisCommandsContainer;\n\n    private final long cacheMaxSize;\n    private final long cacheExpireMs;\n    private final int maxRetryTimes;\n\n    private final boolean isBatchMode;\n\n    private final int batchSize;\n\n    private final int batchMinTriggerDelayMs;\n\n    private transient Cache<Object, RowData> cache;\n\n    private transient Consumer<Object[]> evaler;\n\n    private static final byte[] DEFAULT_JSON_BYTES = \"{}\".getBytes();\n\n    public RedisRowDataBatchLookupFunction(\n            FlinkJedisConfigBase flinkJedisConfigBase\n            , LookupRedisMapper lookupRedisMapper,\n            RedisLookupOptions redisLookupOptions) {\n\n        this.flinkJedisConfigBase = flinkJedisConfigBase;\n\n        this.lookupRedisMapper = lookupRedisMapper;\n        this.redisLookupOptions = redisLookupOptions;\n        RedisCommandDescription redisCommandDescription = lookupRedisMapper.getCommandDescription();\n        this.redisCommand = redisCommandDescription.getRedisCommand();\n        this.additionalKey = redisCommandDescription.getAdditionalKey();\n\n        this.cacheMaxSize = this.redisLookupOptions.getCacheMaxSize();\n        this.cacheExpireMs = this.redisLookupOptions.getCacheExpireMs();\n        this.maxRetryTimes = this.redisLookupOptions.getMaxRetryTimes();\n\n        this.isBatchMode = this.redisLookupOptions.isBatchMode();\n\n        this.batchSize = this.redisLookupOptions.getBatchSize();\n\n        this.batchMinTriggerDelayMs = this.redisLookupOptions.getBatchMinTriggerDelayMs();\n    }\n\n    /**\n     * The invoke entry point of lookup function.\n     *\n     * @param objects the lookup key. Currently only support single rowkey.\n     */\n    public void eval(Object... 
objects) throws IOException {\n\n        for (int retry = 0; retry <= maxRetryTimes; retry++) {\n            try {\n                // fetch result\n                this.evaler.accept(objects);\n                break;\n            } catch (Exception e) {\n                LOG.error(String.format(\"Redis lookup error, retry times = %d\", retry), e);\n                if (retry >= maxRetryTimes) {\n                    throw new RuntimeException(\"Execution of Redis lookup failed.\", e);\n                }\n                try {\n                    Thread.sleep(1000 * retry);\n                } catch (InterruptedException e1) {\n                    throw new RuntimeException(e1);\n                }\n            }\n        }\n    }\n\n\n    @Override\n    public void open(FunctionContext context) {\n        LOG.info(\"start open ...\");\n\n        try {\n            this.redisCommandsContainer =\n                    RedisCommandsContainerBuilder\n                            .build(this.flinkJedisConfigBase);\n            this.redisCommandsContainer.open();\n        } catch (Exception e) {\n            LOG.error(\"Redis has not been properly initialized: \", e);\n            throw new RuntimeException(e);\n        }\n\n        this.cache = cacheMaxSize <= 0 || cacheExpireMs <= 0 ? null : CacheBuilder.newBuilder()\n                .recordStats()\n                .expireAfterWrite(cacheExpireMs, TimeUnit.MILLISECONDS)\n                .maximumSize(cacheMaxSize)\n                .build();\n\n        if (cache != null) {\n            context.getMetricGroup()\n                    .gauge(\"lookupCacheHitRate\", (Gauge<Double>) () -> cache.stats().hitRate());\n\n            this.evaler = in -> {\n\n                List<Object> inner = (List<Object>) in[0];\n                List<byte[]> keys = inner\n                        .stream()\n                        .map(o -> {\n                            if (o instanceof BinaryStringData) {\n                                return ((BinaryStringData) o).getJavaObject().getBytes();\n                            } else {\n                                return String.valueOf(o).getBytes();\n                            }\n                        })\n                        .collect(Collectors.toList());\n                List<Object> value = null;\n                switch (redisCommand) {\n                    case GET:\n                        value = this.redisCommandsContainer.multiGet(keys);\n                        break;\n                    default:\n                        throw new IllegalArgumentException(\"Cannot process such data type: \" + redisCommand);\n                }\n                List<RowData> result = value\n                        .stream()\n                        .map(o -> {\n                            if (null == o) {\n                                return this.lookupRedisMapper.deserialize(DEFAULT_JSON_BYTES);\n                            } else {\n                                return this.lookupRedisMapper.deserialize((byte[]) o);\n                            }\n                        })\n                        .collect(Collectors.toList());\n\n                collect(result);\n            };\n\n            //            this.evaler = in -> {\n            //                RowData cacheRowData = cache.getIfPresent(in);\n            //                if (cacheRowData != null) {\n            ////                    collect(cacheRowData);\n            //                } else {\n            //                    // fetch result\n      
      //                    byte[] key = lookupRedisMapper.serialize(in);\n            //\n            //                    byte[] value = null;\n            //\n            //                    switch (redisCommand) {\n            //                        case GET:\n            //                            value = this.redisCommandsContainer.get(key);\n            //                            break;\n            //                        case HGET:\n            //                            value = this.redisCommandsContainer.hget(key, this.additionalKey.getBytes());\n            //                            break;\n            //                        default:\n            //                            throw new IllegalArgumentException(\"Cannot process such data type: \" +\n            //                            redisCommand);\n            //                    }\n            //\n            //                    RowData rowData = this.lookupRedisMapper.deserialize(value);\n            //\n            //                    collect(rowData);\n            //\n            //                    if (null != rowData) {\n            //                        cache.put(key, rowData);\n            //                    }\n            //                }\n            //            };\n\n        } else {\n            this.evaler = in -> {\n\n                List<Object[]> inner = (List<Object[]>) in[0];\n\n                List<byte[]> keys = inner\n                        .stream()\n                        .map(lookupRedisMapper::serialize)\n                        .collect(Collectors.toList());\n\n                List<Object> value = null;\n\n                switch (redisCommand) {\n                    case GET:\n                        value = this.redisCommandsContainer.multiGet(keys);\n                        break;\n                    default:\n                        throw new IllegalArgumentException(\"Cannot process such data type: \" + redisCommand);\n                }\n\n                List<RowData> result = value\n                        .stream()\n                        .map(o -> this.lookupRedisMapper.deserialize((byte[]) o))\n                        .collect(Collectors.toList());\n\n                collect(result);\n            };\n        }\n\n        LOG.info(\"end open.\");\n    }\n\n    @Override\n    public void close() {\n        LOG.info(\"start close ...\");\n        if (redisCommandsContainer != null) {\n            try {\n                redisCommandsContainer.close();\n            } catch (IOException e) {\n                throw new RuntimeException(e);\n            }\n        }\n        LOG.info(\"end close.\");\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/redis/v2/source/RedisRowDataLookupFunction.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage flink.examples.sql._03.source_sink.table.redis.v2.source;\n\nimport java.io.IOException;\nimport java.util.concurrent.TimeUnit;\nimport java.util.function.Consumer;\n\nimport org.apache.flink.annotation.Internal;\nimport org.apache.flink.metrics.Gauge;\nimport org.apache.flink.shaded.guava18.com.google.common.cache.Cache;\nimport org.apache.flink.shaded.guava18.com.google.common.cache.CacheBuilder;\nimport org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisConfigBase;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.functions.FunctionContext;\nimport org.apache.flink.table.functions.TableFunction;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport flink.examples.sql._03.source_sink.table.redis.container.RedisCommandsContainer;\nimport flink.examples.sql._03.source_sink.table.redis.container.RedisCommandsContainerBuilder;\nimport flink.examples.sql._03.source_sink.table.redis.mapper.LookupRedisMapper;\nimport flink.examples.sql._03.source_sink.table.redis.mapper.RedisCommand;\nimport flink.examples.sql._03.source_sink.table.redis.mapper.RedisCommandDescription;\nimport flink.examples.sql._03.source_sink.table.redis.options.RedisLookupOptions;\n\n/**\n * The RedisRowDataLookupFunction is a standard user-defined table function, it can be used in\n * tableAPI and also useful for temporal table join plan in SQL. 
It looks up the result as {@link\n * RowData}.\n */\n@Internal\npublic class RedisRowDataLookupFunction extends TableFunction<RowData> {\n\n    private static final Logger LOG = LoggerFactory.getLogger(\n            RedisRowDataLookupFunction.class);\n    private static final long serialVersionUID = 1L;\n\n    private String additionalKey;\n    private LookupRedisMapper lookupRedisMapper;\n    private RedisCommand redisCommand;\n\n    protected final RedisLookupOptions redisLookupOptions;\n\n    private FlinkJedisConfigBase flinkJedisConfigBase;\n    private RedisCommandsContainer redisCommandsContainer;\n\n    private final long cacheMaxSize;\n    private final long cacheExpireMs;\n    private final int maxRetryTimes;\n\n    private final boolean isBatchMode;\n\n    private final int batchSize;\n\n    private final int batchMinTriggerDelayMs;\n\n    private transient Cache<Object, RowData> cache;\n\n    private transient Consumer<Object[]> evaler;\n\n    public RedisRowDataLookupFunction(\n            FlinkJedisConfigBase flinkJedisConfigBase\n            , LookupRedisMapper lookupRedisMapper,\n            RedisLookupOptions redisLookupOptions) {\n\n        this.flinkJedisConfigBase = flinkJedisConfigBase;\n\n        this.lookupRedisMapper = lookupRedisMapper;\n        this.redisLookupOptions = redisLookupOptions;\n        RedisCommandDescription redisCommandDescription = lookupRedisMapper.getCommandDescription();\n        this.redisCommand = redisCommandDescription.getRedisCommand();\n        this.additionalKey = redisCommandDescription.getAdditionalKey();\n\n        this.cacheMaxSize = this.redisLookupOptions.getCacheMaxSize();\n        this.cacheExpireMs = this.redisLookupOptions.getCacheExpireMs();\n        this.maxRetryTimes = this.redisLookupOptions.getMaxRetryTimes();\n\n        this.isBatchMode = this.redisLookupOptions.isBatchMode();\n\n        this.batchSize = this.redisLookupOptions.getBatchSize();\n\n        this.batchMinTriggerDelayMs = this.redisLookupOptions.getBatchMinTriggerDelayMs();\n    }\n\n    /**\n     * The invoke entry point of lookup function.\n     *\n     * @param objects the lookup key. Currently only support single rowkey.\n     */\n    public void eval(Object... objects) throws IOException {\n\n        for (int retry = 0; retry <= maxRetryTimes; retry++) {\n            try {\n                // fetch result\n                this.evaler.accept(objects);\n                break;\n            } catch (Exception e) {\n                LOG.error(String.format(\"Redis lookup error, retry times = %d\", retry), e);\n                if (retry >= maxRetryTimes) {\n                    throw new RuntimeException(\"Execution of Redis lookup failed.\", e);\n                }\n                try {\n                    Thread.sleep(1000 * retry);\n                } catch (InterruptedException e1) {\n                    throw new RuntimeException(e1);\n                }\n            }\n        }\n    }\n\n\n    @Override\n    public void open(FunctionContext context) {\n        LOG.info(\"start open ...\");\n\n        try {\n            this.redisCommandsContainer =\n                    RedisCommandsContainerBuilder\n                            .build(this.flinkJedisConfigBase);\n            this.redisCommandsContainer.open();\n        } catch (Exception e) {\n            LOG.error(\"Redis has not been properly initialized: \", e);\n            throw new RuntimeException(e);\n        }\n\n        this.cache = cacheMaxSize <= 0 || cacheExpireMs <= 0 ? 
null : CacheBuilder.newBuilder()\n                .recordStats()\n                .expireAfterWrite(cacheExpireMs, TimeUnit.MILLISECONDS)\n                .maximumSize(cacheMaxSize)\n                .build();\n\n        if (cache != null) {\n            context.getMetricGroup()\n                    .gauge(\"lookupCacheHitRate\", (Gauge<Double>) () -> cache.stats().hitRate());\n\n\n            this.evaler = in -> {\n                RowData cacheRowData = cache.getIfPresent(in);\n                if (cacheRowData != null) {\n//                    collect(cacheRowData);\n                } else {\n                    // fetch result\n                    byte[] key = lookupRedisMapper.serialize(in);\n\n                    byte[] value = null;\n\n                    switch (redisCommand) {\n                        case GET:\n                            value = this.redisCommandsContainer.get(key);\n                            break;\n                        case HGET:\n                            value = this.redisCommandsContainer.hget(key, this.additionalKey.getBytes());\n                            break;\n                        default:\n                            throw new IllegalArgumentException(\"Cannot process such data type: \" + redisCommand);\n                    }\n\n                    RowData rowData = this.lookupRedisMapper.deserialize(value);\n\n                    collect(rowData);\n\n                    if (null != rowData) {\n                        cache.put(key, rowData);\n                    }\n                }\n            };\n\n        } else {\n            this.evaler = in -> {\n                // fetch result\n                byte[] key = lookupRedisMapper.serialize(in);\n\n                byte[] value = null;\n\n                switch (redisCommand) {\n                    case GET:\n                        value = this.redisCommandsContainer.get(key);\n                        break;\n                    case HGET:\n                        value = this.redisCommandsContainer.hget(key, this.additionalKey.getBytes());\n                        break;\n                    default:\n                        throw new IllegalArgumentException(\"Cannot process such data type: \" + redisCommand);\n                }\n\n                RowData rowData = this.lookupRedisMapper.deserialize(value);\n\n                collect(rowData);\n            };\n        }\n\n        LOG.info(\"end open.\");\n    }\n\n    @Override\n    public void close() {\n        LOG.info(\"start close ...\");\n        if (redisCommandsContainer != null) {\n            try {\n                redisCommandsContainer.close();\n            } catch (IOException e) {\n                throw new RuntimeException(e);\n            }\n        }\n        LOG.info(\"end close.\");\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/socket/SocketDynamicTableFactory.java",
    "content": "package flink.examples.sql._03.source_sink.table.socket;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ConfigOptions;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.factories.DeserializationFormatFactory;\nimport org.apache.flink.table.factories.DynamicTableSourceFactory;\nimport org.apache.flink.table.factories.FactoryUtil;\nimport org.apache.flink.table.types.DataType;\n\n\npublic class SocketDynamicTableFactory implements DynamicTableSourceFactory {\n\n    // define all options statically\n    public static final ConfigOption<String> HOSTNAME = ConfigOptions.key(\"hostname\")\n            .stringType()\n            .noDefaultValue();\n\n    public static final ConfigOption<Integer> PORT = ConfigOptions.key(\"port\")\n            .intType()\n            .noDefaultValue();\n\n    public static final ConfigOption<Integer> BYTE_DELIMITER = ConfigOptions.key(\"byte-delimiter\")\n            .intType()\n            .defaultValue(10); // corresponds to '\\n'\n\n    @Override\n    public String factoryIdentifier() {\n        return \"socket\"; // used for matching to `connector = '...'`\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(HOSTNAME);\n        options.add(PORT);\n        options.add(FactoryUtil.FORMAT); // use pre-defined option for format\n        return options;\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(BYTE_DELIMITER);\n        return options;\n    }\n\n    @Override\n    public DynamicTableSource createDynamicTableSource(Context context) {\n        // either implement your custom validation logic here ...\n        // or use the provided helper utility\n        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n\n        // discover a suitable decoding format\n        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat = helper.discoverDecodingFormat(\n                DeserializationFormatFactory.class,\n                FactoryUtil.FORMAT);\n\n        // validate all options\n        helper.validate();\n\n        // get the validated options\n        final ReadableConfig options = helper.getOptions();\n        final String hostname = options.get(HOSTNAME);\n        final int port = options.get(PORT);\n        final byte byteDelimiter = (byte) (int) options.get(BYTE_DELIMITER);\n\n        // derive the produced data type (excluding computed columns) from the catalog table\n        final DataType producedDataType =\n                context.getCatalogTable().getResolvedSchema().toPhysicalRowDataType();\n\n        // create and return dynamic table source\n        return new SocketDynamicTableSource(hostname, port, byteDelimiter, decodingFormat, producedDataType);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/socket/SocketDynamicTableSource.java",
    "content": "package flink.examples.sql._03.source_sink.table.socket;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.connector.ChangelogMode;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.connector.source.ScanTableSource;\nimport org.apache.flink.table.connector.source.SourceFunctionProvider;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.types.DataType;\n\n\npublic class SocketDynamicTableSource implements ScanTableSource {\n\n    private final String hostname;\n    private final int port;\n    private final byte byteDelimiter;\n    private final DecodingFormat<DeserializationSchema<RowData>> decodingFormat;\n    private final DataType producedDataType;\n\n    public SocketDynamicTableSource(\n            String hostname,\n            int port,\n            byte byteDelimiter,\n            DecodingFormat<DeserializationSchema<RowData>> decodingFormat,\n            DataType producedDataType) {\n        this.hostname = hostname;\n        this.port = port;\n        this.byteDelimiter = byteDelimiter;\n        this.decodingFormat = decodingFormat;\n        this.producedDataType = producedDataType;\n    }\n\n    @Override\n    public ChangelogMode getChangelogMode() {\n        // in our example the format decides about the changelog mode\n        // but it could also be the source itself\n        return decodingFormat.getChangelogMode();\n    }\n\n    @Override\n    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {\n\n        // create runtime classes that are shipped to the cluster\n\n        final DeserializationSchema<RowData> deserializer = decodingFormat.createRuntimeDecoder(\n                runtimeProviderContext,\n                producedDataType);\n\n        final SourceFunction<RowData> sourceFunction = new SocketSourceFunction(\n                hostname,\n                port,\n                byteDelimiter,\n                deserializer);\n\n        return SourceFunctionProvider.of(sourceFunction, false);\n    }\n\n    @Override\n    public DynamicTableSource copy() {\n        return new SocketDynamicTableSource(hostname, port, byteDelimiter, decodingFormat, producedDataType);\n    }\n\n    @Override\n    public String asSummaryString() {\n        return \"Socket Table Source\";\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/socket/SocketSourceFunction.java",
    "content": "package flink.examples.sql._03.source_sink.table.socket;\n\nimport java.io.InputStream;\nimport java.net.InetSocketAddress;\nimport java.net.Socket;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.table.data.RowData;\n\n\npublic class SocketSourceFunction extends RichSourceFunction<RowData> implements ResultTypeQueryable<RowData> {\n\n    private final String hostname;\n    private final int port;\n    private final byte byteDelimiter;\n    private final DeserializationSchema<RowData> deserializer;\n\n    private volatile boolean isRunning = true;\n    private Socket currentSocket;\n\n    public SocketSourceFunction(String hostname, int port, byte byteDelimiter,\n            DeserializationSchema<RowData> deserializer) {\n        this.hostname = hostname;\n        this.port = port;\n        this.byteDelimiter = byteDelimiter;\n        this.deserializer = deserializer;\n    }\n\n    @Override\n    public TypeInformation<RowData> getProducedType() {\n        return deserializer.getProducedType();\n    }\n\n    @Override\n    public void open(Configuration parameters) throws Exception {\n        deserializer.open(null);\n\n        this.currentSocket = new Socket();\n\n        this.currentSocket.connect(new InetSocketAddress(hostname, port), 0);\n    }\n\n    @Override\n    public void run(SourceContext<RowData> ctx) throws Exception {\n\n        InputStream stream = this.currentSocket.getInputStream();\n\n        while (isRunning) {\n            // open and consume from socket\n\n            byte[] b = new byte[46];\n\n            stream.read(b, 0, 46);\n\n            RowData rowData = deserializer.deserialize(b);\n\n            ctx.collect(rowData);\n            Thread.sleep(1000);\n        }\n    }\n\n    @Override\n    public void cancel() {\n        isRunning = false;\n        try {\n            currentSocket.close();\n        } catch (Throwable t) {\n            // ignore\n        }\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/user_defined/UserDefinedDynamicTableFactory.java",
    "content": "package flink.examples.sql._03.source_sink.table.user_defined;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ConfigOptions;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.factories.DeserializationFormatFactory;\nimport org.apache.flink.table.factories.DynamicTableSourceFactory;\nimport org.apache.flink.table.factories.FactoryUtil;\nimport org.apache.flink.table.types.DataType;\n\n\npublic class UserDefinedDynamicTableFactory implements DynamicTableSourceFactory {\n\n    // define all options statically\n    public static final ConfigOption<String> CLASS_NAME = ConfigOptions.key(\"class.name\")\n            .stringType()\n            .noDefaultValue();\n\n    @Override\n    public String factoryIdentifier() {\n        return \"user_defined\"; // used for matching to `connector = '...'`\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(CLASS_NAME);\n        options.add(FactoryUtil.FORMAT); // use pre-defined option for format\n        return options;\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        return options;\n    }\n\n    @Override\n    public DynamicTableSource createDynamicTableSource(Context context) {\n        // either implement your custom validation logic here ...\n        // or use the provided helper utility\n        final FactoryUtil.TableFactoryHelper helper = FactoryUtil.createTableFactoryHelper(this, context);\n\n        // discover a suitable decoding format\n        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat = helper.discoverDecodingFormat(\n                DeserializationFormatFactory.class,\n                FactoryUtil.FORMAT);\n\n        // validate all options\n        helper.validate();\n\n        // get the validated options\n        final ReadableConfig options = helper.getOptions();\n        final String className = options.get(CLASS_NAME);\n\n        // derive the produced data type (excluding computed columns) from the catalog table\n        final DataType producedDataType =\n                context.getCatalogTable().getResolvedSchema().toPhysicalRowDataType();\n\n        // create and return dynamic table source\n        return new UserDefinedDynamicTableSource(className, decodingFormat, producedDataType);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/user_defined/UserDefinedDynamicTableSource.java",
    "content": "package flink.examples.sql._03.source_sink.table.user_defined;\n\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Optional;\n\nimport org.apache.flink.api.common.eventtime.WatermarkStrategy;\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.table.api.DataTypes;\nimport org.apache.flink.table.connector.ChangelogMode;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.connector.source.ScanTableSource;\nimport org.apache.flink.table.connector.source.SourceFunctionProvider;\nimport org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;\nimport org.apache.flink.table.connector.source.abilities.SupportsLimitPushDown;\nimport org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;\nimport org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown;\nimport org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata;\nimport org.apache.flink.table.connector.source.abilities.SupportsSourceWatermark;\nimport org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.expressions.ResolvedExpression;\nimport org.apache.flink.table.types.DataType;\n\nimport com.google.common.collect.Lists;\n\nimport lombok.SneakyThrows;\n\n\npublic class UserDefinedDynamicTableSource implements ScanTableSource\n        , SupportsFilterPushDown // 过滤条件下推\n        , SupportsLimitPushDown // limit 条件下推\n        , SupportsPartitionPushDown //\n        , SupportsProjectionPushDown // select 下推\n        , SupportsReadingMetadata // 元数据\n        , SupportsWatermarkPushDown\n        , SupportsSourceWatermark {\n\n    private final String className;\n    private final DecodingFormat<DeserializationSchema<RowData>> decodingFormat;\n    private final DataType producedDataType;\n\n    public UserDefinedDynamicTableSource(\n            String className,\n            DecodingFormat<DeserializationSchema<RowData>> decodingFormat,\n            DataType producedDataType) {\n        this.className = className;\n        this.decodingFormat = decodingFormat;\n        this.producedDataType = producedDataType;\n    }\n\n    @Override\n    public ChangelogMode getChangelogMode() {\n        // in our example the format decides about the changelog mode\n        // but it could also be the source itself\n        return decodingFormat.getChangelogMode();\n    }\n\n    @SneakyThrows\n    @Override\n    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {\n\n        // create runtime classes that are shipped to the cluster\n\n        final DeserializationSchema<RowData> deserializer = decodingFormat.createRuntimeDecoder(\n                runtimeProviderContext,\n                producedDataType);\n\n        Map<String, DataType> readableMetadata = decodingFormat.listReadableMetadata();\n\n        Class<?> clazz = this.getClass().getClassLoader().loadClass(className);\n\n        RichSourceFunction<RowData> r = (RichSourceFunction<RowData>) clazz.getConstructors()[0].newInstance(deserializer);\n\n        return SourceFunctionProvider.of(r, false);\n    }\n\n    @Override\n    public DynamicTableSource copy() {\n        return new 
UserDefinedDynamicTableSource(className, decodingFormat, producedDataType);\n    }\n\n    @Override\n    public String asSummaryString() {\n        return \"UserDefined Table Source\";\n    }\n\n    @Override\n    public Result applyFilters(List<ResolvedExpression> filters) {\n        return Result.of(Lists.newLinkedList(), filters);\n    }\n\n    @Override\n    public void applyLimit(long limit) {\n        System.out.println(1);\n    }\n\n    @Override\n    public Optional<List<Map<String, String>>> listPartitions() {\n        return Optional.empty();\n    }\n\n    @Override\n    public void applyPartitions(List<Map<String, String>> remainingPartitions) {\n        System.out.println(1);\n    }\n\n    @Override\n    public boolean supportsNestedProjection() {\n        return false;\n    }\n\n    @Override\n    public void applyProjection(int[][] projectedFields) {\n        System.out.println(1);\n    }\n\n    @Override\n    public Map<String, DataType> listReadableMetadata() {\n        return new HashMap<String, DataType>() {{\n            put(\"flink_read_timestamp\", DataTypes.BIGINT());\n        }};\n    }\n\n    @Override\n    public void applyReadableMetadata(List<String> metadataKeys, DataType producedDataType) {\n        System.out.println(1);\n    }\n\n    @Override\n    public void applyWatermark(WatermarkStrategy<RowData> watermarkStrategy) {\n        System.out.println(1);\n    }\n\n    @Override\n    public void applySourceWatermark() {\n        System.out.println(1);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_03/source_sink/table/user_defined/UserDefinedSource.java",
    "content": "package flink.examples.sql._03.source_sink.table.user_defined;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.table.data.RowData;\n\nimport com.google.common.collect.ImmutableMap;\n\nimport flink.examples.JacksonUtils;\n\npublic class UserDefinedSource extends RichSourceFunction<RowData> {\n\n    private DeserializationSchema<RowData> dser;\n\n    private volatile boolean isCancel;\n\n    public UserDefinedSource(DeserializationSchema<RowData> dser) {\n        this.dser = dser;\n    }\n\n    @Override\n    public void run(SourceContext<RowData> ctx) throws Exception {\n        while (!this.isCancel) {\n            ctx.collect(this.dser.deserialize(\n                    JacksonUtils.bean2Json(ImmutableMap.of(\"user_id\", 1111L, \"name\", \"antigeneral\")).getBytes()\n            ));\n            Thread.sleep(1000);\n        }\n    }\n\n    @Override\n    public void cancel() {\n        this.isCancel = true;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_04/type/BlinkPlannerTest.java",
    "content": "package flink.examples.sql._04.type;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper_UDF;\n\n\npublic class BlinkPlannerTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.setParallelism(10);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode()\n                .build();\n\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                env.fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627254000000L),\n                        Tuple3.of(\"2\", 1L, 1627218000000L + 5000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 86400000 + 7000L)))\n                        .assignTimestampsAndWatermarks(\n                                new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Long, Long>>(Time.seconds(0L)) {\n                                    @Override\n                                    public long extractTimestamp(Tuple3<String, Long, Long> element) {\n                                        return element.f2;\n                                    }\n                                });\n\n        tEnv.registerFunction(\"mod\", new Mod_UDF());\n\n        tEnv.registerFunction(\"status_mapper\", new StatusMapper_UDF());\n\n        tEnv.createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp, rowtime.rowtime\");\n\n        String sql = \"SELECT\\n\"\n                + \"  count(1),\\n\"\n                + \"  cast(tumble_start(rowtime, INTERVAL '1' DAY) as string)\\n\"\n                + \"FROM\\n\"\n                + \"  source_db.source_table\\n\"\n                + \"GROUP BY\\n\"\n                + \"  tumble(rowtime, INTERVAL '1' DAY)\";\n\n        Table result = tEnv.sqlQuery(sql);\n\n        tEnv.toAppendStream(result, Row.class).print();\n\n        env.execute();\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_04/type/JavaEnvTest.java",
    "content": "package flink.examples.sql._04.type;//package flink.examples.sql._04.type;\n//\n//\n//import java.util.Arrays;\n//\n//import org.apache.flink.api.java.tuple.Tuple3;\n//import org.apache.flink.streaming.api.datastream.DataStream;\n//import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\n//import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\n//import org.apache.flink.streaming.api.windowing.time.Time;\n//import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n//import org.apache.flink.types.Row;\n//\n//public class JavaEnvTest {\n//\n//    public static void main(String[] args) throws Exception {\n//\n//\n//        StreamExecutionEnvironment sEnv = StreamExecutionEnvironment.getExecutionEnvironment();\n//        sEnv.setParallelism(1);\n//\n//        // create a TableEnvironment for streaming queries\n//        StreamTableEnvironment sTableEnv = StreamTableEnvironment.create(sEnv);\n//\n//        sTableEnv.registerFunction(\"table1\", new TableFunc0());\n//\n//        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n//                sEnv.fromCollection(Arrays.asList(\n//                        Tuple3.of(\"2\", 1L, 1627254000000L),\n//                        Tuple3.of(\"2\", 1L, 1627218000000L + 5000L),\n//                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n//                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n//                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n//                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n//                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n//                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n//                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n//                        Tuple3.of(\"2\", 301L, 1627218000000L + 86400000 + 7000L)))\n//                        .assignTimestampsAndWatermarks(\n//                                new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Long, Long>>(Time.seconds(0L)) {\n//                                    @Override\n//                                    public long extractTimestamp(Tuple3<String, Long, Long> element) {\n//                                        return element.f2;\n//                                    }\n//                                });\n//\n//        sTableEnv.createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n//                \"status, id, timestamp, rowtime.rowtime\");\n//\n//        String sql = \"select * \\n\"\n//                + \"from source_db.source_table as a\\n\"\n//                + \"LEFT JOIN LATERAL TABLE(table1(a.status)) AS DIM(status_new) ON TRUE\";\n//\n//        sTableEnv.toAppendStream(sTableEnv.sqlQuery(sql), Row.class).print();\n//\n//        sEnv.execute();\n//\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_04/type/OldPlannerTest.java",
    "content": "package flink.examples.sql._04.type;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper_UDF;\n\n\npublic class OldPlannerTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.setParallelism(10);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useOldPlanner()\n                .inStreamingMode()\n                .build();\n\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                env.fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627254000000L),\n                        Tuple3.of(\"2\", 1L, 1627218000000L + 5000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 86400000 + 7000L)))\n                        .assignTimestampsAndWatermarks(\n                                new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Long, Long>>(Time.seconds(0L)) {\n                                    @Override\n                                    public long extractTimestamp(Tuple3<String, Long, Long> element) {\n                                        return element.f2;\n                                    }\n                                });\n\n        tEnv.registerFunction(\"mod\", new Mod_UDF());\n\n        tEnv.registerFunction(\"status_mapper\", new StatusMapper_UDF());\n\n        tEnv.createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp, rowtime.rowtime\");\n\n        String sql = \"SELECT\\n\"\n                + \"  count(1),\\n\"\n                + \"  cast(tumble_start(rowtime, INTERVAL '1' DAY) as string)\\n\"\n                + \"FROM\\n\"\n                + \"  source_db.source_table\\n\"\n                + \"GROUP BY\\n\"\n                + \"  tumble(rowtime, INTERVAL '1' DAY)\";\n\n        Table result = tEnv.sqlQuery(sql);\n\n        tEnv.toAppendStream(result, Row.class).print();\n\n        env.execute();\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/ProtobufFormatTest.java",
    "content": "package flink.examples.sql._05.format.formats;\n\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n/**\n * nc -lk 9999\n */\npublic class ProtobufFormatTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sourceTableSql = \"CREATE TABLE protobuf_source (\"\n                + \"  name STRING\\n\"\n                + \"  , names ARRAY<STRING>\\n\"\n                + \"  , si_map MAP<STRING, INT>\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'socket',\\n\"\n                + \"  'hostname' = 'localhost',\\n\"\n                + \"  'port' = '9999',\\n\"\n                + \"  'format' = 'protobuf',\\n\"\n                + \"  'protobuf.class-name' = 'flink.examples.sql._04.format.formats.protobuf.Test'\\n\"\n                + \")\";\n\n        String sinkTableSql = \"CREATE TABLE print_sink (\\n\"\n                + \"  name STRING\\n\"\n                + \"  , names ARRAY<STRING>\\n\"\n                + \"  , si_map MAP<STRING, INT>\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectSql = \"INSERT INTO print_sink\\n\"\n                + \"SELECT *\\n\"\n                + \"FROM protobuf_source\\n\";\n\n        tEnv.executeSql(sourceTableSql);\n        tEnv.executeSql(sinkTableSql);\n        tEnv.executeSql(selectSql);\n\n        env.execute();\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/SocketWriteTest.java",
    "content": "package flink.examples.sql._05.format.formats;\n\nimport java.io.IOException;\nimport java.net.ServerSocket;\nimport java.net.Socket;\nimport java.util.Map;\n\nimport com.google.common.collect.ImmutableMap;\n\nimport flink.examples.JacksonUtils;\nimport flink.examples.sql._05.format.formats.protobuf.Test;\n\n\npublic class SocketWriteTest {\n\n\n    public static void main(String[] args) throws IOException, InterruptedException {\n\n        ServerSocket serversocket = new ServerSocket(9999);\n\n        final Socket socket = serversocket.accept();\n\n        int i = 0;\n\n        while (true) {\n\n            Map<String, Integer> map = ImmutableMap.of(\"key1\", 1, \"地图\", i);\n\n            Test test = Test.newBuilder()\n                    .setName(\"姓名\" + i)\n                    .addNames(\"姓名列表\" + i)\n                    .putAllSiMap(map)\n                    .build();\n\n            System.out.println(JacksonUtils.bean2Json(test));\n            byte[] b = test.toByteArray();\n\n            socket.getOutputStream().write(b);\n\n            socket.getOutputStream().flush();\n            i++;\n\n            if (i == 10) {\n                break;\n            }\n\n            Thread.sleep(500);\n        }\n\n        socket.close();\n        serversocket.close();\n\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/csv/ChangelogCsvDeserializer.java",
    "content": "package flink.examples.sql._05.format.formats.csv;\n\nimport java.util.List;\nimport java.util.regex.Pattern;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.table.connector.RuntimeConverter.Context;\nimport org.apache.flink.table.connector.source.DynamicTableSource.DataStructureConverter;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.types.logical.LogicalType;\nimport org.apache.flink.table.types.logical.LogicalTypeRoot;\nimport org.apache.flink.types.Row;\nimport org.apache.flink.types.RowKind;\n\n\npublic class ChangelogCsvDeserializer implements DeserializationSchema<RowData> {\n\n    private final List<LogicalType> parsingTypes;\n    private final DataStructureConverter converter;\n    private final TypeInformation<RowData> producedTypeInfo;\n    private final String columnDelimiter;\n\n    public ChangelogCsvDeserializer(\n            List<LogicalType> parsingTypes,\n            DataStructureConverter converter,\n            TypeInformation<RowData> producedTypeInfo,\n            String columnDelimiter) {\n        this.parsingTypes = parsingTypes;\n        this.converter = converter;\n        this.producedTypeInfo = producedTypeInfo;\n        this.columnDelimiter = columnDelimiter;\n    }\n\n    @Override\n    public TypeInformation<RowData> getProducedType() {\n        // return the type information required by Flink's core interfaces\n        return producedTypeInfo;\n    }\n\n    @Override\n    public void open(InitializationContext context) {\n        // converters must be open\n        converter.open(Context.create(ChangelogCsvDeserializer.class.getClassLoader()));\n    }\n\n    @Override\n    public RowData deserialize(byte[] message) {\n        // parse the columns including a changelog flag\n        final String[] columns = new String(message).split(Pattern.quote(columnDelimiter));\n        final RowKind kind = RowKind.valueOf(columns[0]);\n        final Row row = new Row(kind, parsingTypes.size());\n        for (int i = 0; i < parsingTypes.size(); i++) {\n            row.setField(i, parse(parsingTypes.get(i).getTypeRoot(), columns[i + 1]));\n        }\n        // convert to internal data structure\n        return (RowData) converter.toInternal(row);\n    }\n\n    private static Object parse(LogicalTypeRoot root, String value) {\n        switch (root) {\n            case INTEGER:\n                return Integer.parseInt(value);\n            case VARCHAR:\n                return value;\n            default:\n                throw new IllegalArgumentException();\n        }\n    }\n\n    @Override\n    public boolean isEndOfStream(RowData nextElement) {\n        return false;\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/csv/ChangelogCsvFormat.java",
    "content": "package flink.examples.sql._05.format.formats.csv;\n\nimport java.util.List;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.table.connector.ChangelogMode;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.connector.source.DynamicTableSource.DataStructureConverter;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.types.logical.LogicalType;\nimport org.apache.flink.types.RowKind;\n\n\npublic class ChangelogCsvFormat implements DecodingFormat<DeserializationSchema<RowData>> {\n\n    private final String columnDelimiter;\n\n    public ChangelogCsvFormat(String columnDelimiter) {\n        this.columnDelimiter = columnDelimiter;\n    }\n\n    @Override\n    @SuppressWarnings(\"unchecked\")\n    public DeserializationSchema<RowData> createRuntimeDecoder(\n            DynamicTableSource.Context context,\n            DataType producedDataType) {\n        // create type information for the DeserializationSchema\n        final TypeInformation<RowData> producedTypeInfo = context.createTypeInformation(\n                producedDataType);\n\n        // most of the code in DeserializationSchema will not work on internal data structures\n        // create a converter for conversion at the end\n        final DataStructureConverter converter = context.createDataStructureConverter(producedDataType);\n\n        // use logical types during runtime for parsing\n        final List<LogicalType> parsingTypes = producedDataType.getLogicalType().getChildren();\n\n        // create runtime class\n        return new ChangelogCsvDeserializer(parsingTypes, converter, producedTypeInfo, columnDelimiter);\n    }\n\n    @Override\n    public ChangelogMode getChangelogMode() {\n        // define that this format can produce INSERT and DELETE rows\n        return ChangelogMode.newBuilder()\n                .addContainedKind(RowKind.INSERT)\n                .addContainedKind(RowKind.DELETE)\n                .build();\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/csv/ChangelogCsvFormatFactory.java",
    "content": "package flink.examples.sql._05.format.formats.csv;\n\nimport java.util.Collections;\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ConfigOptions;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.factories.DeserializationFormatFactory;\nimport org.apache.flink.table.factories.DynamicTableFactory;\nimport org.apache.flink.table.factories.FactoryUtil;\n\n\npublic class ChangelogCsvFormatFactory implements DeserializationFormatFactory {\n\n    // define all options statically\n    public static final ConfigOption<String> COLUMN_DELIMITER = ConfigOptions.key(\"column-delimiter\")\n            .stringType()\n            .defaultValue(\"|\");\n\n    @Override\n    public String factoryIdentifier() {\n        return \"changelog-csv\";\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        return Collections.emptySet();\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n        final Set<ConfigOption<?>> options = new HashSet<>();\n        options.add(COLUMN_DELIMITER);\n        return options;\n    }\n\n    @Override\n    public DecodingFormat<DeserializationSchema<RowData>> createDecodingFormat(\n            DynamicTableFactory.Context context,\n            ReadableConfig formatOptions) {\n        // either implement your custom validation logic here ...\n        // or use the provided helper method\n        FactoryUtil.validateFactoryOptions(this, formatOptions);\n\n        // get the validated options\n        final String columnDelimiter = formatOptions.get(COLUMN_DELIMITER);\n\n        // create and return the format\n        return new ChangelogCsvFormat(columnDelimiter);\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/descriptors/Protobuf.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.descriptors;\n\nimport java.util.Map;\n\nimport org.apache.flink.annotation.PublicEvolving;\nimport org.apache.flink.table.descriptors.DescriptorProperties;\nimport org.apache.flink.table.descriptors.FormatDescriptor;\nimport org.apache.flink.util.Preconditions;\n\nimport com.google.protobuf.Message;\n\n/**\n * Format descriptor for Apache Protobuf messages.\n */\n@PublicEvolving\npublic class Protobuf extends FormatDescriptor {\n\n    private Class<? extends Message> messageClass;\n    private String protobufDescriptorHttpGetUrl;\n\n    public Protobuf() {\n        super(ProtobufValidator.FORMAT_TYPE_VALUE, 1);\n    }\n\n    /**\n     * Sets the class of the Protobuf message.\n     *\n     * @param messageClass class of the Protobuf message.\n     */\n    public Protobuf messageClass(Class<? extends Message> messageClass) {\n        Preconditions.checkNotNull(messageClass);\n        this.messageClass = messageClass;\n        return this;\n    }\n\n    /**\n     * Sets the Protobuf for protobuf messages.\n     *\n     * @param protobufDescriptorHttpGetUrl protobuf descriptor http get url\n     */\n    public Protobuf protobufDescriptorHttpGetUrl(String protobufDescriptorHttpGetUrl) {\n        Preconditions.checkNotNull(protobufDescriptorHttpGetUrl);\n        this.protobufDescriptorHttpGetUrl = protobufDescriptorHttpGetUrl;\n        return this;\n    }\n\n    @Override\n    protected Map<String, String> toFormatProperties() {\n        final DescriptorProperties properties = new DescriptorProperties();\n\n        if (null != messageClass) {\n            properties.putClass(ProtobufValidator.FORMAT_MESSAGE_CLASS, messageClass);\n        }\n        if (null != protobufDescriptorHttpGetUrl) {\n            properties.putString(ProtobufValidator.FORMAT_PROTOBUF_DESCRIPTOR_HTTP_GET_URL, protobufDescriptorHttpGetUrl);\n        }\n\n        return properties.asMap();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/descriptors/ProtobufValidator.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.descriptors;\n\nimport org.apache.flink.table.api.ValidationException;\nimport org.apache.flink.table.descriptors.DescriptorProperties;\nimport org.apache.flink.table.descriptors.FormatDescriptorValidator;\n\n/**\n * Validator for {@link Protobuf}.\n */\npublic class ProtobufValidator extends FormatDescriptorValidator {\n\n    public static final String FORMAT_TYPE_VALUE = \"protobuf\";\n    public static final String FORMAT_MESSAGE_CLASS = \"format.message-class\";\n    public static final String FORMAT_PROTOBUF_DESCRIPTOR_HTTP_GET_URL = \"format.protobuf-descriptor-http-get-url\";\n\n    @Override\n    public void validate(DescriptorProperties properties) {\n        super.validate(properties);\n        final boolean hasMessageClass = properties.containsKey(FORMAT_MESSAGE_CLASS);\n        final boolean hasProtobufDescriptorHttpGetUrl = properties.containsKey(FORMAT_PROTOBUF_DESCRIPTOR_HTTP_GET_URL);\n        if (hasMessageClass && hasProtobufDescriptorHttpGetUrl) {\n            throw new ValidationException(\"A definition of both a  Protobuf message class and Protobuf get descriptor http url  is not allowed.\");\n        } else if (hasMessageClass) {\n            properties.validateString(FORMAT_MESSAGE_CLASS, false, 1);\n        } else if (hasProtobufDescriptorHttpGetUrl) {\n            properties.validateString(FORMAT_PROTOBUF_DESCRIPTOR_HTTP_GET_URL, false, 1);\n        } else {\n            throw new ValidationException(\"A definition of an Protobuf message class or Protobuf get descriptor http url is required.\");\n        }\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/row/ProtobufDeserializationSchema.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.row;\n\nimport java.io.IOException;\nimport java.io.ObjectInputStream;\nimport java.io.ObjectOutputStream;\n\nimport org.apache.flink.api.common.serialization.AbstractDeserializationSchema;\nimport org.apache.flink.util.Preconditions;\n\nimport com.google.protobuf.Descriptors;\nimport com.google.protobuf.DynamicMessage;\nimport com.google.protobuf.GeneratedMessageV3;\nimport com.google.protobuf.Message;\n\nimport flink.examples.sql._05.format.formats.utils.MoreSuppliers;\n\n\n/**\n *  Deserialization schema that deserializes from Protobuf binary format.\n */\npublic class ProtobufDeserializationSchema<T extends Message> extends AbstractDeserializationSchema<T> {\n\n    private static final long serialVersionUID = 2098447220136965L;\n\n    /** Class to deserialize to. */\n    private Class<T> messageClazz;\n\n    /** DescriptorBytes in case of Message for serialization purpose. */\n    private byte[] descriptorBytes;\n\n    /** Descriptor generated from DescriptorBytes */\n    private transient Descriptors.Descriptor descriptor;\n\n    /** Default instance for this T message */\n    private transient T defaultInstance;\n\n    /**\n     * Creates {@link ProtobufDeserializationSchema} that produces {@link Message} using provided schema.\n     *\n     * @param descriptorBytes of produced messages\n     * @return deserialized message in form of {@link Message}\n     */\n    public static ProtobufDeserializationSchema<Message> forGenericMessage(byte[] descriptorBytes) {\n        return new ProtobufDeserializationSchema<>(Message.class, descriptorBytes);\n    }\n\n    /**\n     * Creates {@link ProtobufDeserializationSchema} that produces classes that were generated from protobuf schema.\n     *\n     * @param messageClazz class of message to be produced\n     * @return deserialized message\n     */\n    public static <T extends GeneratedMessageV3> ProtobufDeserializationSchema<T> forSpecificMessage(Class<T> messageClazz) {\n        return new ProtobufDeserializationSchema<>(messageClazz, null);\n    }\n\n    /**\n     * Creates a Protobuf deserialization schema.\n     *\n     * @param messageClazz class to which deserialize.\n     * @param descriptorBytes descriptor to which deserialize.\n     */\n    @SuppressWarnings(\"unchecked\")\n    ProtobufDeserializationSchema(Class<T> messageClazz, byte[] descriptorBytes) {\n        Preconditions.checkNotNull(messageClazz, \"Protobuf message class must not be null.\");\n        this.messageClazz = messageClazz;\n        this.descriptorBytes = descriptorBytes;\n        if (null != this.descriptorBytes) {\n            this.descriptor = ProtobufUtils.getDescriptor(descriptorBytes);\n            this.defaultInstance = (T) DynamicMessage.newBuilder(this.descriptor).getDefaultInstanceForType();\n        } else {\n            this.descriptor = ProtobufUtils.getDescriptor(messageClazz);\n            this.defaultInstance = ProtobufUtils.getDefaultInstance(messageClazz);\n        }\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    @Override\n    public T deserialize(byte[] bytes) throws IOException {\n        // read message\n        return (T) this.defaultInstance.newBuilderForType().mergeFrom(bytes);\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private void readObject(ObjectInputStream inputStream) throws ClassNotFoundException, IOException {\n        this.messageClazz = (Class<T>) inputStream.readObject();\n        this.descriptorBytes = MoreSuppliers.throwing(() -> 
ProtobufUtils.getBytes(inputStream));\n        if (null != this.descriptorBytes) {\n            this.descriptor = ProtobufUtils.getDescriptor(descriptorBytes);\n            this.defaultInstance = (T) DynamicMessage.newBuilder(this.descriptor).getDefaultInstanceForType();\n        } else {\n            this.descriptor = ProtobufUtils.getDescriptor(messageClazz);\n            this.defaultInstance = ProtobufUtils.getDefaultInstance(messageClazz);\n        }\n    }\n\n    private void writeObject(ObjectOutputStream outputStream) throws IOException {\n        outputStream.writeObject(this.messageClazz);\n        outputStream.write(this.descriptorBytes);\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/row/ProtobufRowDeserializationSchema.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.row;\n\n\nimport java.io.IOException;\nimport java.io.ObjectInputStream;\nimport java.io.ObjectOutputStream;\nimport java.io.Serializable;\nimport java.math.BigDecimal;\nimport java.math.BigInteger;\nimport java.sql.Date;\nimport java.sql.Time;\nimport java.sql.Timestamp;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Objects;\nimport java.util.TimeZone;\nimport java.util.stream.Collectors;\n\nimport org.apache.flink.api.common.serialization.AbstractDeserializationSchema;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.common.typeinfo.Types;\nimport org.apache.flink.api.java.typeutils.ListTypeInfo;\nimport org.apache.flink.api.java.typeutils.MapTypeInfo;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.types.Row;\nimport org.apache.flink.util.Preconditions;\nimport org.joda.time.DateTime;\nimport org.joda.time.DateTimeFieldType;\nimport org.joda.time.LocalDate;\nimport org.joda.time.LocalTime;\n\nimport com.google.protobuf.ByteString;\nimport com.google.protobuf.Descriptors;\nimport com.google.protobuf.Descriptors.FieldDescriptor;\nimport com.google.protobuf.DynamicMessage;\nimport com.google.protobuf.GeneratedMessageV3;\nimport com.google.protobuf.MapEntry;\nimport com.google.protobuf.Message;\n\nimport flink.examples.sql._05.format.formats.protobuf.row.typeutils.ProtobufSchemaConverter;\n\n\npublic class ProtobufRowDeserializationSchema extends AbstractDeserializationSchema<Row> {\n    /**\n     * Used for time conversions into SQL types.\n     */\n    private static final TimeZone LOCAL_TZ = TimeZone.getDefault();\n\n    /**\n     * Protobuf message class for deserialization. Might be null if message class is not available.\n     */\n    private Class<? extends Message> messageClazz;\n\n    /**\n     * Protobuf serialization descriptorBytes\n     */\n    private byte[] descriptorBytes;\n\n    /**\n     * Protobuf serialization descriptor.\n     */\n    private transient Descriptors.Descriptor descriptor;\n\n    /**\n     * Type information describing the result type.\n     */\n    private transient RowTypeInfo typeInfo;\n\n    /**\n     * Protobuf defaultInstance for descriptor\n     */\n    private transient Message defaultInstance;\n\n    private transient DeserializationRuntimeConverter deserializationRuntimeConverter;\n\n    @FunctionalInterface\n    interface DeserializationRuntimeConverter extends Serializable {\n        Object convert(Object object);\n    }\n\n    /**\n     * Creates a Protobuf deserialization descriptor for the given message class. Having the\n     * concrete Protobuf message class might improve performance.\n     *\n     * @param messageClazz Protobuf message class used to deserialize Protobuf's message to Flink's row\n     */\n    public ProtobufRowDeserializationSchema(Class<? 
extends GeneratedMessageV3> messageClazz) {\n        Preconditions.checkNotNull(messageClazz, \"Protobuf message class must not be null.\");\n        this.messageClazz = messageClazz;\n        this.descriptorBytes = null;\n        this.descriptor = ProtobufUtils.getDescriptor(messageClazz);\n        this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(messageClazz);\n        this.defaultInstance = ProtobufUtils.getDefaultInstance(messageClazz);\n        this.deserializationRuntimeConverter = this.createRowConverter(this.descriptor, this.typeInfo);\n    }\n\n    /**\n     * Creates a Protobuf deserialization descriptor for the given Protobuf descriptorBytes.\n     *\n     * @param descriptorBytes Protobuf descriptorBytes to deserialize Protobuf's message to Flink's row\n     */\n    public ProtobufRowDeserializationSchema(byte[] descriptorBytes) {\n        Preconditions.checkNotNull(descriptorBytes, \"Protobuf descriptorBytes must not be null.\");\n        this.messageClazz = null;\n        this.descriptorBytes = descriptorBytes;\n        this.descriptor = ProtobufUtils.getDescriptor(descriptorBytes);\n        this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(this.descriptor);\n        this.defaultInstance = DynamicMessage.newBuilder(this.descriptor).getDefaultInstanceForType();\n        this.deserializationRuntimeConverter = createRowConverter(this.descriptor, this.typeInfo);\n    }\n\n    @Override\n    public Row deserialize(byte[] bytes) throws IOException {\n        try {\n            Message message = this.defaultInstance\n                    .newBuilderForType()\n                    .mergeFrom(bytes)\n                    .build();\n            return (Row) this.deserializationRuntimeConverter.convert(message);\n        } catch (Exception e) {\n            throw new IOException(\"Failed to deserialize Protobuf message.\", e);\n        }\n    }\n\n    @Override\n    public TypeInformation<Row> getProducedType() {\n        return this.typeInfo;\n    }\n\n    // --------------------------------------------------------------------------------------------\n\n    private DeserializationRuntimeConverter createRowConverter(\n            Descriptors.Descriptor descriptor, RowTypeInfo rowTypeInfo) {\n        final FieldDescriptor[] fieldDescriptors =\n                descriptor.getFields().toArray(new FieldDescriptor[0]);\n        final TypeInformation<?>[] fieldTypeInfos = rowTypeInfo.getFieldTypes();\n\n        final int length = fieldDescriptors.length;\n\n        final DeserializationRuntimeConverter[] deserializationRuntimeConverters = new DeserializationRuntimeConverter[length];\n\n        for (int i = 0; i < length; i++) {\n            final FieldDescriptor fieldDescriptor = fieldDescriptors[i];\n            final TypeInformation<?> fieldTypeInfo = fieldTypeInfos[i];\n            deserializationRuntimeConverters[i] = createConverter(fieldDescriptor, fieldTypeInfo);\n        }\n\n        return (Object o) -> {\n            Message message = (Message) o;\n            final Row row = new Row(length);\n            for (int i = 0; i < length; i++) {\n                Object fieldO = message.getField(fieldDescriptors[i]);\n                row.setField(i, deserializationRuntimeConverters[i].convert(fieldO));\n            }\n            return row;\n        };\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private DeserializationRuntimeConverter createConverter(Descriptors.GenericDescriptor genericDescriptor, TypeInformation<?> info) {\n        // we 
perform the conversion based on descriptor information but enriched with pre-computed\n        // type information where useful (i.e., for list)\n\n        if (genericDescriptor instanceof Descriptors.Descriptor) {\n\n            return createRowConverter((Descriptors.Descriptor) genericDescriptor, (RowTypeInfo) info);\n\n        } else if (genericDescriptor instanceof FieldDescriptor) {\n\n            FieldDescriptor fieldDescriptor = ((FieldDescriptor) genericDescriptor);\n\n            // field\n            switch (fieldDescriptor.getType()) {\n                case INT32:\n                case FIXED32:\n                case UINT32:\n                case SFIXED32:\n                case SINT32:\n                case INT64:\n                case UINT64:\n                case FIXED64:\n                case SFIXED64:\n                case SINT64:\n                case DOUBLE:\n                case FLOAT:\n                case BOOL:\n                case STRING:\n                    if (info instanceof ListTypeInfo) {\n                        // list\n                        TypeInformation<?> elementTypeInfo = ((ListTypeInfo) info).getElementTypeInfo();\n\n                        return this.createListConverter(elementTypeInfo);\n                    } else {\n                        return this.createObjectConverter(info);\n                    }\n                case ENUM:\n                    if (info instanceof ListTypeInfo) {\n                        // list\n                        return (Object o) -> ((List) o)\n                                .stream()\n                                .map(Object::toString)\n                                .collect(Collectors.toList());\n                    } else {\n                        return Object::toString;\n                    }\n                case GROUP:\n                case MESSAGE:\n                    if (info instanceof ListTypeInfo) {\n                        // list\n                        TypeInformation<?> elementTypeInfo = ((ListTypeInfo) info).getElementTypeInfo();\n                        Descriptors.Descriptor elementDescriptor = fieldDescriptor.getMessageType();\n\n                        DeserializationRuntimeConverter elementConverter = this.createConverter(elementDescriptor, elementTypeInfo);\n\n                        return (Object o) -> ((List) o)\n                                .stream()\n                                .map(elementConverter::convert)\n                                .collect(Collectors.toList());\n\n                    } else if (info instanceof MapTypeInfo) {\n                        // map\n                        final MapTypeInfo<?, ?> mapTypeInfo = (MapTypeInfo<?, ?>) info;\n\n                        boolean isDynamicMessage = false;\n\n                        if (this.messageClazz == null) {\n                            isDynamicMessage = true;\n                        }\n\n                        // todo map's key only support string\n                        final DeserializationRuntimeConverter keyConverter = Object::toString;\n\n                        final FieldDescriptor keyFieldDescriptor =\n                                fieldDescriptor.getMessageType().getFields().get(0);\n\n                        final FieldDescriptor valueFieldDescriptor =\n                                fieldDescriptor.getMessageType().getFields().get(1);\n\n                        final TypeInformation<?> valueTypeInfo =\n                                mapTypeInfo.getValueTypeInfo();\n\n                        
final DeserializationRuntimeConverter valueConverter =\n                                createConverter(valueFieldDescriptor, valueTypeInfo);\n\n                        if (isDynamicMessage) {\n\n                            return (Object o) -> {\n                                final List<DynamicMessage> dynamicMessages = (List<DynamicMessage>) o;\n\n                                final Map<String, Object> convertedMap = new HashMap<>(dynamicMessages.size());\n\n                                dynamicMessages.forEach((DynamicMessage dynamicMessage) -> {\n                                    convertedMap.put(\n                                            (String) keyConverter.convert(dynamicMessage.getField(keyFieldDescriptor))\n                                            , valueConverter.convert(dynamicMessage.getField(valueFieldDescriptor)));\n                                });\n\n                                return convertedMap;\n                            };\n\n                        } else {\n\n                            return (Object o) -> {\n                                final List<MapEntry> mapEntryList = (List<MapEntry>) o;\n                                final Map<String, Object> convertedMap = new HashMap<>(mapEntryList.size());\n                                mapEntryList.forEach((MapEntry message) -> {\n                                    convertedMap.put(\n                                            (String) keyConverter.convert(message.getKey())\n                                            , valueConverter.convert(message.getValue()));\n                                });\n\n                                return convertedMap;\n                            };\n                        }\n                    } else if (info instanceof RowTypeInfo) {\n                        // row\n                        return createRowConverter(((FieldDescriptor) genericDescriptor).getMessageType(), (RowTypeInfo) info);\n                    }\n                    throw new IllegalStateException(\"Message expected but was: \");\n                case BYTES:\n\n                    return (Object o) -> {\n                        final byte[] bytes = ((ByteString) o).toByteArray();\n                        if (Types.BIG_DEC == info) {\n                            return convertToDecimal(bytes);\n                        }\n                        return bytes;\n                    };\n            }\n        }\n\n        throw new IllegalArgumentException(\"Unsupported Protobuf type '\" + genericDescriptor.getName() + \"'.\");\n\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private DeserializationRuntimeConverter createListConverter(TypeInformation<?> info) {\n\n        DeserializationRuntimeConverter elementConverter;\n\n        if (Types.SQL_DATE == info) {\n\n            elementConverter = this::convertToDate;\n\n        } else if (Types.SQL_TIME == info) {\n\n            elementConverter = this::convertToTime;\n        } else {\n\n            elementConverter = (Object fieldO) -> (fieldO);\n        }\n\n        return (Object o) -> ((List) o)\n                .stream()\n                .map(elementConverter::convert)\n                .collect(Collectors.toList());\n    }\n\n    private DeserializationRuntimeConverter createObjectConverter(TypeInformation<?> info) {\n        if (Types.SQL_DATE == info) {\n            return this::convertToDate;\n        } else if (Types.SQL_TIME == info) {\n            return this::convertToTime;\n        } else {\n            return (Object o) 
-> o;\n        }\n    }\n\n    // --------------------------------------------------------------------------------------------\n\n    private BigDecimal convertToDecimal(byte[] bytes) {\n        return new BigDecimal(new BigInteger(bytes));\n    }\n\n    private Date convertToDate(Object object) {\n        final long millis;\n        if (object instanceof Integer) {\n            final Integer value = (Integer) object;\n            // adopted from Apache Calcite\n            final long t = (long) value * 86400000L;\n            millis = t - (long) LOCAL_TZ.getOffset(t);\n        } else {\n            // use 'provided' Joda time\n            final LocalDate value = (LocalDate) object;\n            millis = value.toDate().getTime();\n        }\n        return new Date(millis);\n    }\n\n    private Time convertToTime(Object object) {\n        final long millis;\n        if (object instanceof Integer) {\n            millis = (Integer) object;\n        } else {\n            // use 'provided' Joda time\n            final LocalTime value = (LocalTime) object;\n            millis = value.get(DateTimeFieldType.millisOfDay());\n        }\n        return new Time(millis - LOCAL_TZ.getOffset(millis));\n    }\n\n    private Timestamp convertToTimestamp(Object object) {\n        final long millis;\n        if (object instanceof Long) {\n            millis = (Long) object;\n        } else {\n            // use 'provided' Joda time\n            final DateTime value = (DateTime) object;\n            millis = value.toDate().getTime();\n        }\n        return new Timestamp(millis - LOCAL_TZ.getOffset(millis));\n    }\n\n    private void writeObject(ObjectOutputStream outputStream) throws IOException {\n        if (Objects.nonNull(this.messageClazz)) {\n            outputStream.writeObject(this.messageClazz);\n        } else {\n            outputStream.writeObject(this.descriptorBytes);\n        }\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private void readObject(ObjectInputStream inputStream) throws ClassNotFoundException, IOException {\n\n        Object o = inputStream.readObject();\n\n        if (o instanceof Class) {\n            this.messageClazz = (Class<? extends Message>) o;\n            this.descriptor = ProtobufUtils.getDescriptor(this.messageClazz);\n            this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(this.messageClazz);\n            this.defaultInstance = ProtobufUtils.getDefaultInstance(this.messageClazz);\n        } else {\n            this.descriptorBytes = (byte[]) o;\n            this.descriptor = ProtobufUtils.getDescriptor(this.descriptorBytes);\n            this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(this.descriptor);\n            this.defaultInstance = DynamicMessage.newBuilder(this.descriptor).getDefaultInstanceForType();\n        }\n        this.deserializationRuntimeConverter = this.createConverter(this.descriptor, this.typeInfo);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/row/ProtobufRowFormatFactory.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.row;\n\nimport java.io.InputStream;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.common.serialization.SerializationSchema;\nimport org.apache.flink.table.descriptors.DescriptorProperties;\nimport org.apache.flink.table.factories.DeserializationSchemaFactory;\nimport org.apache.flink.table.factories.SerializationSchemaFactory;\nimport org.apache.flink.table.factories.TableFormatFactoryBase;\nimport org.apache.flink.types.Row;\nimport org.apache.http.client.methods.CloseableHttpResponse;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.impl.client.CloseableHttpClient;\nimport org.apache.http.impl.client.HttpClients;\n\nimport com.google.protobuf.GeneratedMessageV3;\n\nimport flink.examples.sql._05.format.formats.protobuf.descriptors.ProtobufValidator;\nimport flink.examples.sql._05.format.formats.utils.MoreRunnables;\nimport flink.examples.sql._05.format.formats.utils.MoreSuppliers;\n\n\n/**\n * Table format factory for providing configured instances of Protobuf-to-row {@link SerializationSchema}\n * and {@link DeserializationSchema}.\n */\npublic class ProtobufRowFormatFactory extends TableFormatFactoryBase<Row>\n        implements SerializationSchemaFactory<Row>, DeserializationSchemaFactory<Row> {\n\n    public ProtobufRowFormatFactory() {\n        super(ProtobufValidator.FORMAT_TYPE_VALUE, 1, false);\n    }\n\n    @Override\n    protected List<String> supportedFormatProperties() {\n        List<String> properties = new ArrayList<>(2);\n        properties.add(ProtobufValidator.FORMAT_TYPE_VALUE);\n        properties.add(ProtobufValidator.FORMAT_PROTOBUF_DESCRIPTOR_HTTP_GET_URL);\n        return properties;\n    }\n\n    @Override\n    public DeserializationSchema<Row> createDeserializationSchema(Map<String, String> properties) {\n        final DescriptorProperties descriptorProperties = getValidatedProperties(properties);\n\n        // create and configure\n        if (descriptorProperties.containsKey(ProtobufValidator.FORMAT_MESSAGE_CLASS)) {\n            return new ProtobufRowDeserializationSchema(\n                    descriptorProperties.getClass(ProtobufValidator.FORMAT_MESSAGE_CLASS, GeneratedMessageV3.class));\n        } else {\n\n            String descriptorHttpGetUrl = descriptorProperties.getString(ProtobufValidator.FORMAT_PROTOBUF_DESCRIPTOR_HTTP_GET_URL);\n\n            byte[] descriptorBytes = httpGetDescriptorBytes(descriptorHttpGetUrl);\n\n            return new ProtobufRowDeserializationSchema(descriptorBytes);\n        }\n    }\n\n    @Override\n    public SerializationSchema<Row> createSerializationSchema(Map<String, String> properties) {\n        final DescriptorProperties descriptorProperties = getValidatedProperties(properties);\n\n        // create and configure\n        if (descriptorProperties.containsKey(ProtobufValidator.FORMAT_MESSAGE_CLASS)) {\n            return new ProtobufRowSerializationSchema(\n                    descriptorProperties.getClass(ProtobufValidator.FORMAT_MESSAGE_CLASS, GeneratedMessageV3.class));\n        } else {\n\n            String descriptorHttpGetUrl = descriptorProperties.getString(ProtobufValidator.FORMAT_PROTOBUF_DESCRIPTOR_HTTP_GET_URL);\n\n            byte[] descriptorBytes = httpGetDescriptorBytes(descriptorHttpGetUrl);\n\n            return new ProtobufRowSerializationSchema(descriptorBytes);\n        
}\n    }\n\n    public static byte[] httpGetDescriptorBytes(final String descriptorHttpGetUrl) {\n        byte[] descriptorBytes = null;\n\n        HttpGet get = new HttpGet(descriptorHttpGetUrl);\n\n        CloseableHttpClient httpClient = HttpClients.createDefault();\n        CloseableHttpResponse httpResponse = MoreSuppliers.throwing(() -> httpClient.execute(get));\n        if (200 == httpResponse.getStatusLine().getStatusCode()) {\n\n            long length = httpResponse.getEntity().getContentLength();\n            byte[] buffer = new byte[(int) length];\n\n            InputStream is = MoreSuppliers.throwing(() -> httpResponse.getEntity().getContent());\n            // a single read() may return fewer bytes than the content length, so keep reading until the buffer is full\n            MoreSuppliers.throwing(() -> {\n                int offset = 0;\n                while (offset < buffer.length) {\n                    int read = is.read(buffer, offset, buffer.length - offset);\n                    if (read < 0) {\n                        break;\n                    }\n                    offset += read;\n                }\n                return offset;\n            });\n            descriptorBytes = buffer;\n\n            MoreRunnables.throwing(is::close);\n        }\n        MoreRunnables.throwing(httpResponse::close);\n        MoreRunnables.throwing(httpClient::close);\n\n        if (null != descriptorBytes && 0 != descriptorBytes.length) {\n            return descriptorBytes;\n        } else {\n            throw new RuntimeException(String.format(\"Tried to get Protobuf descriptorBytes via HTTP GET [%s], but the response was null or empty, please check your descriptor URL\", descriptorHttpGetUrl));\n        }\n    }\n\n    private static DescriptorProperties getValidatedProperties(Map<String, String> propertiesMap) {\n        DescriptorProperties descriptorProperties = new DescriptorProperties();\n        descriptorProperties.putProperties(propertiesMap);\n\n        // validate\n        (new ProtobufValidator()).validate(descriptorProperties);\n\n        return descriptorProperties;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/row/ProtobufRowSerializationSchema.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.row;\n\n\nimport static com.google.protobuf.Descriptors.FieldDescriptor.Type.ENUM;\n\nimport java.io.IOException;\nimport java.io.ObjectInputStream;\nimport java.io.ObjectOutputStream;\nimport java.io.Serializable;\nimport java.math.BigDecimal;\nimport java.sql.Date;\nimport java.sql.Time;\nimport java.sql.Timestamp;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.NoSuchElementException;\nimport java.util.Objects;\nimport java.util.TimeZone;\nimport java.util.stream.Collectors;\n\nimport org.apache.flink.api.common.serialization.SerializationSchema;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ListTypeInfo;\nimport org.apache.flink.api.java.typeutils.MapTypeInfo;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.types.Row;\nimport org.apache.flink.util.Preconditions;\n\nimport com.google.protobuf.Descriptors;\nimport com.google.protobuf.Descriptors.FieldDescriptor;\nimport com.google.protobuf.DynamicMessage;\nimport com.google.protobuf.GeneratedMessageV3;\nimport com.google.protobuf.MapEntry;\nimport com.google.protobuf.Message;\nimport com.google.protobuf.WireFormat;\n\nimport flink.examples.sql._05.format.formats.protobuf.row.typeutils.ProtobufSchemaConverter;\n\n\n/**\n * Serialization schema that serializes {@link Row} into Protobuf bytes.\n *\n * <p>Serializes objects that are represented in (nested) Flink rows. It support types that\n * are compatible with Flink's Table & SQL API.\n *\n * <p>Note: Changes in this class need to be kept in sync with the corresponding runtime\n * class {@link ProtobufRowDeserializationSchema} and schema converter {@link flink.formats.protobuf.typeutils.ProtobufSchemaConverter}.\n */\npublic class ProtobufRowSerializationSchema implements SerializationSchema<Row> {\n\n    private static final long serialVersionUID = 2098447220136965L;\n\n    /**\n     * Used for time conversions from SQL types.\n     */\n    private static final TimeZone LOCAL_TZ = TimeZone.getDefault();\n\n    /**\n     * Protobuf message class for serialization. Might be null if message class is not available.\n     */\n    private Class<? extends Message> messageClazz;\n\n    /**\n     * DescriptorBytes for deserialization.\n     */\n    private byte[] descriptorBytes;\n\n    /**\n     * Type information describing the result type.\n     */\n    private transient RowTypeInfo typeInfo;\n\n    private transient Descriptors.Descriptor descriptor;\n\n    private transient Message defaultInstance;\n\n    private transient SerializationRuntimeConverter serializationRuntimeConverter;\n\n    @FunctionalInterface\n    interface SerializationRuntimeConverter extends Serializable {\n        Object convert(Object object);\n    }\n\n    /**\n     * Creates an Protobuf serialization schema for the given message class.\n     *\n     * @param messageClazz Protobuf message class used to serialize Flink's row to Protobuf's message\n     */\n    public ProtobufRowSerializationSchema(Class<? 
extends GeneratedMessageV3> messageClazz) {\n        Preconditions.checkNotNull(messageClazz, \"Protobuf message class must not be null.\");\n        this.messageClazz = messageClazz;\n        this.descriptorBytes = null;\n        this.descriptor = ProtobufUtils.getDescriptor(this.messageClazz);\n        this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(this.messageClazz);\n        this.defaultInstance = ProtobufUtils.getDefaultInstance(this.messageClazz);\n        this.serializationRuntimeConverter = this.createRowConverter(this.descriptor, this.typeInfo);\n    }\n\n    /**\n     * Creates an Protobuf serialization schema for the given descriptorBytes.\n     *\n     * @param descriptorBytes descriptorBytes used to serialize Flink's row to Protobuf's message\n     */\n    public ProtobufRowSerializationSchema(byte[] descriptorBytes) {\n        Preconditions.checkNotNull(descriptorBytes, \"Protobuf message class must not be null.\");\n        this.messageClazz = null;\n        this.descriptorBytes = descriptorBytes;\n        this.descriptor = ProtobufUtils.getDescriptor(this.descriptorBytes);\n        this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(descriptorBytes);\n        this.defaultInstance = ProtobufUtils.getDefaultInstance(descriptorBytes);\n        this.serializationRuntimeConverter = this.createRowConverter(this.descriptor, this.typeInfo);\n    }\n\n    @Override\n    public byte[] serialize(Row row) {\n        try {\n            // convert to message\n            Message message = (Message) this.serializationRuntimeConverter.convert(row);\n            return message.toByteArray();\n        } catch (Throwable e) {\n            throw new RuntimeException(\"Failed to serialize row.\", e);\n        }\n    }\n\n    private SerializationRuntimeConverter createRowConverter(Descriptors.Descriptor descriptor, RowTypeInfo rowTypeInfo) {\n        final FieldDescriptor[] fieldDescriptors =\n                descriptor.getFields().toArray(new FieldDescriptor[0]);\n        final TypeInformation<?>[] fieldTypeInfos = rowTypeInfo.getFieldTypes();\n\n        final int length = fieldDescriptors.length;\n\n        final SerializationRuntimeConverter[] serializationRuntimeConverters = new SerializationRuntimeConverter[length];\n\n        for (int i = 0; i < length; ++i) {\n            final FieldDescriptor fieldDescriptor = fieldDescriptors[i];\n            final TypeInformation<?> fieldTypeInfo = fieldTypeInfos[i];\n            serializationRuntimeConverters[i] = createConverter(fieldDescriptor, fieldTypeInfo);\n        }\n\n        return (Object o) -> {\n            Row row = (Row) o;\n            final DynamicMessage.Builder dynamicMessageBuilder = DynamicMessage.newBuilder(descriptor);\n            for (int i = 0; i < length; i++) {\n                Object fieldO = row.getField(i);\n                dynamicMessageBuilder.setField(fieldDescriptors[i], serializationRuntimeConverters[i].convert(fieldO));\n            }\n            return dynamicMessageBuilder.build();\n        };\n    }\n\n    private SerializationRuntimeConverter createListConverter(TypeInformation<?> info) {\n        if (info instanceof ListTypeInfo) {\n            // list\n\n            return (Object o) -> {\n                List<Object> results = new ArrayList<>(((List<?>) o).size());\n                for (Object fieldO : ((List<?>) o)) {\n                    if (fieldO instanceof Date) {\n                        results.add(this.convertFromDate((Date) fieldO));\n                    
} else if (fieldO instanceof Time) {\n                        results.add(this.convertFromTime((Time) fieldO));\n                    } else if (fieldO instanceof Timestamp) {\n                        results.add(convertFromTimestamp((Timestamp) fieldO));\n                    } else {\n                        results.add(fieldO);\n                    }\n                }\n                return results;\n            };\n        } else {\n\n            return (Object o) -> {\n                if (o instanceof Date) {\n                    return this.convertFromDate((Date) o);\n                } else if (o instanceof Time) {\n                    return this.convertFromTime((Time) o);\n                } else if (o instanceof Timestamp) {\n                    return convertFromTimestamp((Timestamp) o);\n                } else {\n                    return o;\n                }\n            };\n        }\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private SerializationRuntimeConverter createConverter(Descriptors.GenericDescriptor genericDescriptor, TypeInformation<?> info) {\n\n        if (genericDescriptor instanceof Descriptors.Descriptor) {\n\n            return createRowConverter((Descriptors.Descriptor) genericDescriptor, (RowTypeInfo) info);\n\n        } else if (genericDescriptor instanceof FieldDescriptor) {\n\n            FieldDescriptor fieldDescriptor = ((FieldDescriptor) genericDescriptor);\n\n            // field\n            switch (fieldDescriptor.getType()) {\n                case INT32:\n                case FIXED32:\n                case UINT32:\n                case SFIXED32:\n                case SINT32:\n                case INT64:\n                case UINT64:\n                case FIXED64:\n                case SFIXED64:\n                case SINT64:\n                case DOUBLE:\n                case FLOAT:\n                case BOOL:\n                    // check for logical type\n                    return createListConverter(info);\n                case STRING:\n                case ENUM:\n                    if (info instanceof ListTypeInfo) {\n                        // list\n                        return (Object o) -> new ArrayList<>((List<?>) o)\n                                .stream()\n                                .map((Object fieldO) -> convertFromEnum(fieldDescriptor, fieldO))\n                                .collect(Collectors.toList());\n                    } else {\n                        return (Object o) -> convertFromEnum(fieldDescriptor, o);\n                    }\n                case GROUP:\n                case MESSAGE:\n                    if (info instanceof ListTypeInfo) {\n                        // list\n                        TypeInformation<?> elementTypeInfo = ((ListTypeInfo) info).getElementTypeInfo();\n                        Descriptors.Descriptor elementDescriptor = fieldDescriptor.getMessageType();\n\n                        SerializationRuntimeConverter elementConverter = this.createConverter(elementDescriptor, elementTypeInfo);\n\n                        return (Object o) -> ((List) o)\n                                .stream()\n                                .map(elementConverter::convert)\n                                .collect(Collectors.toList());\n\n                    } else if (info instanceof MapTypeInfo) {\n                        // map\n\n                        final Descriptors.Descriptor messageType = fieldDescriptor.getMessageType();\n                        final WireFormat.FieldType keyFieldType = 
fieldDescriptor.getMessageType().getFields().get(0).getLiteType();\n                        final WireFormat.FieldType valueFieldType = fieldDescriptor.getMessageType().getFields().get(1).getLiteType();\n                        final FieldDescriptor valueFieldDescriptor = fieldDescriptor.getMessageType().getFields().get(1);\n                        final TypeInformation<?> valueTypeInfo = ((MapTypeInfo) info).getValueTypeInfo();\n\n                        SerializationRuntimeConverter valueConverter = createConverter(valueFieldDescriptor, valueTypeInfo);\n\n                        return (Object o) -> {\n                            final List<MapEntry<?, ?>> pbMapEntries = new ArrayList<>(((Map<?, ?>) o).size());\n                            for (Map.Entry<?, ?> mapEntry : ((Map<?, ?>) o).entrySet()) {\n                                pbMapEntries.add(MapEntry.newDefaultInstance(\n                                        messageType\n                                        , keyFieldType\n                                        , mapEntry.getKey()\n                                        , valueFieldType\n                                        , valueConverter.convert(mapEntry.getValue())));\n                            }\n                            return pbMapEntries;\n                        };\n                    } else if (info instanceof RowTypeInfo) {\n                        // row\n                        return createRowConverter(fieldDescriptor.getMessageType(), (RowTypeInfo) info);\n                    }\n                    throw new IllegalStateException(\"Message expected but was: \");\n                case BYTES:\n                    // check for logical type\n\n                    return (Object o) -> {\n                        if (o instanceof BigDecimal) {\n                            return convertFromDecimal((BigDecimal) o);\n                        }\n                        return o;\n                    };\n            }\n        }\n        throw new RuntimeException(\"error\");\n    }\n\n    private byte[] convertFromDecimal(BigDecimal decimal) {\n        // byte array must contain the two's-complement representation of the\n        // unscaled integer value in big-endian byte order\n        return decimal.unscaledValue().toByteArray();\n    }\n\n    private int convertFromDate(Date date) {\n        final long time = date.getTime();\n        final long converted = time + (long) LOCAL_TZ.getOffset(time);\n        return (int) (converted / 86400000L);\n    }\n\n    private int convertFromTime(Time date) {\n        final long time = date.getTime();\n        final long converted = time + (long) LOCAL_TZ.getOffset(time);\n        return (int) (converted % 86400000L);\n    }\n\n    private long convertFromTimestamp(Timestamp date) {\n        // adopted from Apache Calcite\n        final long time = date.getTime();\n        return time + (long) LOCAL_TZ.getOffset(time);\n    }\n\n    private Object convertFromEnum(FieldDescriptor fieldDescriptor, Object object) {\n        if (ENUM == fieldDescriptor.getType()) {\n\n            Descriptors.EnumDescriptor enumDescriptor = fieldDescriptor.getEnumType();\n\n            Descriptors.EnumValueDescriptor enumValue = null;\n\n            for (Descriptors.EnumValueDescriptor enumValueDescriptor : enumDescriptor.getValues()) {\n                if (enumValueDescriptor.toString().equals(object)) {\n                    enumValue = enumValueDescriptor;\n                }\n            }\n\n            if (null != enumValue) {\n         
       return enumValue;\n            } else {\n                throw new NoSuchElementException(String.format(fieldDescriptor.getFullName() + \" enumValues has not such element [%s]\", object));\n            }\n        } else {\n            return object.toString();\n        }\n    }\n\n    private void writeObject(ObjectOutputStream outputStream) throws IOException {\n        if (Objects.nonNull(this.messageClazz)) {\n            outputStream.writeObject(this.messageClazz);\n        } else {\n            outputStream.writeObject(this.descriptorBytes);\n        }\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private void readObject(ObjectInputStream inputStream) throws ClassNotFoundException, IOException {\n\n        Object o = inputStream.readObject();\n\n        if (o instanceof Class) {\n            this.messageClazz = (Class<? extends Message>) o;\n            this.descriptor = ProtobufUtils.getDescriptor(this.messageClazz);\n            this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(this.messageClazz);\n            this.defaultInstance = ProtobufUtils.getDefaultInstance(this.messageClazz);\n        } else {\n            this.descriptorBytes = (byte[]) o;\n            this.descriptor = ProtobufUtils.getDescriptor(this.descriptorBytes);\n            this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(this.descriptorBytes);\n            this.defaultInstance = DynamicMessage.newBuilder(this.descriptor).getDefaultInstanceForType();\n        }\n        this.serializationRuntimeConverter = this.createConverter(this.descriptor, this.typeInfo);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/row/ProtobufSerializationSchema.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.row;\n\nimport org.apache.flink.api.common.serialization.SerializationSchema;\n\nimport com.google.protobuf.Message;\n\npublic class ProtobufSerializationSchema<T extends Message> implements SerializationSchema<T> {\n\n    @Override\n    public byte[] serialize(T t) {\n        return t.toByteArray();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/row/ProtobufUtils.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.row;\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.InputStream;\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Method;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.function.Function;\nimport java.util.stream.Collectors;\n\nimport com.google.protobuf.DescriptorProtos;\nimport com.google.protobuf.DescriptorProtos.FileDescriptorProto;\nimport com.google.protobuf.Descriptors;\nimport com.google.protobuf.Descriptors.FileDescriptor;\nimport com.google.protobuf.DynamicMessage;\nimport com.google.protobuf.Message;\n\nimport flink.examples.sql._05.format.formats.utils.MoreSuppliers;\n\n\npublic class ProtobufUtils {\n\n    @SuppressWarnings(\"unchecked\")\n    public static <M extends Message> M getDefaultInstance(Class<M> messageClazz) {\n        try {\n            Method getDefaultInstanceMethod = messageClazz.getMethod(\"getDefaultInstance\");\n            return (M) getDefaultInstanceMethod.invoke(null);\n        } catch (NoSuchMethodException | IllegalAccessException | InvocationTargetException var) {\n            throw new IllegalArgumentException(var);\n        }\n    }\n\n    public static Message getDefaultInstance(byte[] descriptorBytes) {\n        return DynamicMessage.newBuilder(getDescriptor(descriptorBytes)).getDefaultInstanceForType();\n    }\n\n    // example\n    public static Descriptors.Descriptor getDescriptor(byte[] descriptorBytes) {\n        DescriptorProtos.FileDescriptorSet fileDescriptorSet =\n                MoreSuppliers.throwing(() -> DescriptorProtos.FileDescriptorSet.parseFrom(descriptorBytes));\n        List<FileDescriptorProto> fileDescriptorProtos = fileDescriptorSet.getFileList();\n        Map<String, FileDescriptorProto> protoNameFileDescriptorProtoMapper = fileDescriptorProtos\n                .stream()\n                .collect(Collectors.toMap(FileDescriptorProto::getName, Function.identity()));\n        FileDescriptor fileDescriptor =\n                MoreSuppliers.throwing(() ->\n                        FileDescriptor.buildFrom(fileDescriptorProtos.get(0), new FileDescriptor[0]));\n        return fileDescriptor.getMessageTypes().get(0);\n    }\n\n    public static FileDescriptor getFileDescriptor(byte[] descriptorBytes) {\n        DescriptorProtos.FileDescriptorSet fileDescriptorSet =\n                MoreSuppliers.throwing(() -> DescriptorProtos.FileDescriptorSet.parseFrom(descriptorBytes));\n        List<FileDescriptorProto> fileDescriptorProtos = fileDescriptorSet.getFileList();\n        Map<String, FileDescriptorProto> protoNameFileDescriptorProtoMapper = fileDescriptorProtos\n                .stream()\n                .collect(Collectors.toMap(FileDescriptorProto::getName, Function.identity()));\n        return MoreSuppliers.throwing(() ->\n                FileDescriptor.buildFrom(fileDescriptorProtos.get(0), new FileDescriptor[0]));\n    }\n\n    public static Descriptors.Descriptor getDescriptor(Class<? 
extends Message> messageClazz) {\n        return getDefaultInstance(messageClazz).getDescriptorForType();\n    }\n\n\n    public static byte[] getBytes(InputStream is) throws Exception {\n        try (ByteArrayOutputStream bos = new ByteArrayOutputStream()) {\n            byte[] buffer = new byte[1024];\n            int len;\n            while ((len = is.read(buffer)) != -1) {\n                bos.write(buffer, 0, len);\n            }\n            // the caller owns the input stream, so it is intentionally not closed here\n            bos.flush();\n            return bos.toByteArray();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/row/typeutils/ProtobufSchemaConverter.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.row.typeutils;\n\nimport java.util.List;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.common.typeinfo.Types;\nimport org.apache.flink.api.java.typeutils.MapTypeInfo;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.table.types.logical.ArrayType;\nimport org.apache.flink.table.types.logical.BigIntType;\nimport org.apache.flink.table.types.logical.BinaryType;\nimport org.apache.flink.table.types.logical.BooleanType;\nimport org.apache.flink.table.types.logical.DoubleType;\nimport org.apache.flink.table.types.logical.FloatType;\nimport org.apache.flink.table.types.logical.IntType;\nimport org.apache.flink.table.types.logical.LogicalType;\nimport org.apache.flink.table.types.logical.MapType;\nimport org.apache.flink.table.types.logical.RowType;\nimport org.apache.flink.table.types.logical.RowType.RowField;\nimport org.apache.flink.table.types.logical.VarCharType;\nimport org.apache.flink.types.Row;\nimport org.apache.flink.util.Preconditions;\n\nimport com.google.common.collect.Lists;\nimport com.google.protobuf.Descriptors;\nimport com.google.protobuf.Descriptors.FieldDescriptor;\nimport com.google.protobuf.Message;\n\nimport flink.examples.sql._05.format.formats.protobuf.row.ProtobufUtils;\n\n\n/**\n * Converts an Protobuf schema into Flink's type information. It uses {@link RowTypeInfo} for representing\n * objects and converts Protobuf types into types that are compatible with Flink's Table & SQL API.\n *\n * <p>Note: Changes in this class need to be kept in sync with the corresponding runtime\n */\npublic class ProtobufSchemaConverter {\n\n    private ProtobufSchemaConverter() {\n        // private\n    }\n\n    /**\n     * Converts an Protobuf class into a nested row structure with deterministic field order and data\n     * types that are compatible with Flink's Table & SQL API.\n     *\n     * @param protobufClass Protobuf message that contains schema information\n     * @return type information matching the schema\n     */\n    @SuppressWarnings(\"unchecked\")\n    public static <T extends Message> TypeInformation<Row> convertToTypeInfo(Class<T> protobufClass) {\n        Preconditions.checkNotNull(protobufClass, \"Protobuf specific message class must not be null.\");\n        // determine schema to retrieve deterministic field order\n        final Descriptors.Descriptor descriptor = ProtobufUtils.getDescriptor(protobufClass);\n        return (TypeInformation<Row>) convertToTypeInfo(descriptor);\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    public static <T extends Message> RowType convertToRowDataTypeInfo(Class<T> protobufClass) {\n        Preconditions.checkNotNull(protobufClass, \"Protobuf specific message class must not be null.\");\n        // determine schema to retrieve deterministic field order\n        final Descriptors.Descriptor descriptor = ProtobufUtils.getDescriptor(protobufClass);\n        return (RowType) convertToRowDataTypeInfo(descriptor);\n    }\n\n    /**\n     * Converts an Protobuf descriptorBytes into a nested row structure with deterministic field order and data\n     * types that are compatible with Flink's Table & SQL API.\n     *\n     * @param descriptorBytes Protobuf descriptorBytes\n     * @return type information matching the schema\n     */\n    @SuppressWarnings(\"unchecked\")\n    public static <T extends Message> TypeInformation<Row> convertToTypeInfo(byte[] descriptorBytes) {\n        
Preconditions.checkNotNull(descriptorBytes, \"Protobuf descriptorBytes must not be null.\");\n        // determine schema to retrieve deterministic field order\n        final Descriptors.Descriptor descriptor = ProtobufUtils.getDescriptor(descriptorBytes);\n        return (TypeInformation<Row>) convertToTypeInfo(descriptor);\n    }\n\n    public static LogicalType convertToRowDataTypeInfo(Descriptors.GenericDescriptor genericDescriptor) {\n\n\n        if (genericDescriptor instanceof Descriptors.Descriptor) {\n\n            Descriptors.Descriptor descriptor = ((Descriptors.Descriptor) genericDescriptor);\n\n            List<FieldDescriptor> fieldDescriptors = descriptor.getFields();\n\n            int size = fieldDescriptors.size();\n\n            final LogicalType[] types = new LogicalType[size];\n            final String[] names = new String[size];\n            for (int i = 0; i < size; i++) {\n                final FieldDescriptor field = descriptor.getFields().get(i);\n                types[i] = convertToRowDataTypeInfo(field);\n                names[i] = field.getName();\n            }\n\n            if (descriptor.getOptions().getMapEntry()) {\n                // map\n\n                return new MapType(types[0], types[1]);\n            } else {\n                // message\n\n                List<RowField> rowFields = Lists.newLinkedList();\n\n                for (int i = 0; i < size; i++) {\n                    rowFields.add(new RowField(names[i], types[i]));\n                }\n\n                return new RowType(rowFields);\n            }\n\n        } else if (genericDescriptor instanceof FieldDescriptor) {\n\n            FieldDescriptor fieldDescriptor = ((FieldDescriptor) genericDescriptor);\n\n            LogicalType logicalType = null;\n\n            // field\n            switch (fieldDescriptor.getType()) {\n                case DOUBLE:\n                    logicalType = new DoubleType();\n                    break;\n                case FLOAT:\n                    logicalType = new FloatType();\n                    break;\n                case INT64:\n                case UINT64:\n                case FIXED64:\n                case SFIXED64:\n                case SINT64:\n                    logicalType = new BigIntType();\n                    break;\n                case INT32:\n                case FIXED32:\n                case UINT32:\n                case SFIXED32:\n                case SINT32:\n                    logicalType = new IntType();\n                    break;\n                case BOOL:\n                    logicalType = new BooleanType();\n                    break;\n                case STRING:\n                case ENUM:\n                    logicalType = new VarCharType(Integer.MAX_VALUE);\n                    break;\n                case GROUP:\n                case MESSAGE:\n                    logicalType = convertToRowDataTypeInfo(fieldDescriptor.getMessageType());\n                    break;\n                case BYTES:\n                    logicalType = new ArrayType(new BinaryType());\n                    break;\n            }\n\n            if (fieldDescriptor.isRepeated() && !(logicalType instanceof MapType)) {\n                return new ArrayType(logicalType);\n            } else {\n                return logicalType;\n            }\n\n\n        }\n\n        throw new IllegalArgumentException(\"Unsupported Protobuf type '\" + genericDescriptor.getName() + \"'.\");\n\n    }\n\n    public static TypeInformation<?> 
convertToTypeInfo(Descriptors.GenericDescriptor genericDescriptor) {\n\n\n        if (genericDescriptor instanceof Descriptors.Descriptor) {\n\n            Descriptors.Descriptor descriptor = ((Descriptors.Descriptor) genericDescriptor);\n\n            List<FieldDescriptor> fieldDescriptors = descriptor.getFields();\n\n            int size = fieldDescriptors.size();\n\n            final TypeInformation<?>[] types = new TypeInformation<?>[size];\n            final String[] names = new String[size];\n            for (int i = 0; i < size; i++) {\n                final FieldDescriptor field = descriptor.getFields().get(i);\n                types[i] = convertToTypeInfo(field);\n                names[i] = field.getName();\n            }\n\n            if (descriptor.getOptions().getMapEntry()) {\n                // map\n\n                return Types.MAP(types[0], types[1]);\n            } else {\n                // message\n\n                return Types.ROW_NAMED(names, types);\n            }\n\n        } else if (genericDescriptor instanceof FieldDescriptor) {\n\n            FieldDescriptor fieldDescriptor = ((FieldDescriptor) genericDescriptor);\n\n            TypeInformation<?> typeInformation = null;\n\n            // field\n            switch (fieldDescriptor.getType()) {\n                case DOUBLE:\n                    typeInformation = Types.DOUBLE;\n                    break;\n                case FLOAT:\n                    typeInformation = Types.FLOAT;\n                    break;\n                case INT64:\n                case UINT64:\n                case FIXED64:\n                case SFIXED64:\n                case SINT64:\n                    typeInformation = Types.LONG;\n                    break;\n                case INT32:\n                case FIXED32:\n                case UINT32:\n                case SFIXED32:\n                case SINT32:\n                    typeInformation = Types.INT;\n                    break;\n                case BOOL:\n                    typeInformation = Types.BOOLEAN;\n                    break;\n                case STRING:\n                case ENUM:\n                    typeInformation = Types.STRING;\n                    break;\n                case GROUP:\n                case MESSAGE:\n                    typeInformation = convertToTypeInfo(fieldDescriptor.getMessageType());\n                    break;\n                case BYTES:\n                    typeInformation = Types.PRIMITIVE_ARRAY(Types.BYTE);\n                    break;\n            }\n\n            if (fieldDescriptor.isRepeated() && !(typeInformation instanceof MapTypeInfo)) {\n                return Types.LIST(typeInformation);\n            } else {\n                return typeInformation;\n            }\n\n\n        }\n\n        throw new IllegalArgumentException(\"Unsupported Protobuf type '\" + genericDescriptor.getName() + \"'.\");\n\n\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/rowdata/ProtobufFormatFactory.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.rowdata;\n\nimport static flink.examples.sql._05.format.formats.protobuf.rowdata.ProtobufOptions.PROTOBUF_CLASS_NAME;\n\nimport java.util.Collections;\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.api.common.serialization.SerializationSchema;\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ReadableConfig;\nimport org.apache.flink.table.connector.ChangelogMode;\nimport org.apache.flink.table.connector.format.DecodingFormat;\nimport org.apache.flink.table.connector.format.EncodingFormat;\nimport org.apache.flink.table.connector.source.DynamicTableSource;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.factories.DeserializationFormatFactory;\nimport org.apache.flink.table.factories.DynamicTableFactory.Context;\nimport org.apache.flink.table.factories.FactoryUtil;\nimport org.apache.flink.table.factories.SerializationFormatFactory;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.types.logical.RowType;\n\nimport com.google.protobuf.GeneratedMessageV3;\n\n\npublic class ProtobufFormatFactory implements DeserializationFormatFactory, SerializationFormatFactory {\n\n    public static final String IDENTIFIER = \"protobuf\";\n\n\n    @Override\n    public DecodingFormat<DeserializationSchema<RowData>> createDecodingFormat(Context context,\n            ReadableConfig formatOptions) {\n\n        FactoryUtil.validateFactoryOptions(this, formatOptions);\n\n        final String className = formatOptions.get(PROTOBUF_CLASS_NAME);\n\n        try {\n            Class<GeneratedMessageV3> protobufV3 =\n                    (Class<GeneratedMessageV3>) this.getClass().getClassLoader().loadClass(className);\n\n            return new DecodingFormat<DeserializationSchema<RowData>>() {\n                @Override\n                public DeserializationSchema<RowData> createRuntimeDecoder(DynamicTableSource.Context context,\n                        DataType physicalDataType) {\n                    final RowType rowType = (RowType) physicalDataType.getLogicalType();\n\n                    return new ProtobufRowDataDeserializationSchema(\n                            protobufV3\n                            , true\n                            , rowType);\n                }\n\n                @Override\n                public ChangelogMode getChangelogMode() {\n                    return ChangelogMode.insertOnly();\n                }\n            };\n        } catch (ClassNotFoundException e) {\n            throw new RuntimeException(e);\n        }\n    }\n\n    @Override\n    public EncodingFormat<SerializationSchema<RowData>> createEncodingFormat(Context context,\n            ReadableConfig formatOptions) {\n        return null;\n    }\n\n    @Override\n    public String factoryIdentifier() {\n        return IDENTIFIER;\n    }\n\n    @Override\n    public Set<ConfigOption<?>> requiredOptions() {\n        return Collections.emptySet();\n    }\n\n    @Override\n    public Set<ConfigOption<?>> optionalOptions() {\n\n        Set<ConfigOption<?>> optionalOptions = new HashSet<>();\n\n        optionalOptions.add(PROTOBUF_CLASS_NAME);\n\n        return optionalOptions;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/rowdata/ProtobufOptions.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.rowdata;\n\nimport org.apache.flink.configuration.ConfigOption;\nimport org.apache.flink.configuration.ConfigOptions;\n\n\npublic class ProtobufOptions {\n\n    public static final ConfigOption<String> PROTOBUF_CLASS_NAME =\n            ConfigOptions.key(\"class-name\")\n                    .stringType()\n                    .noDefaultValue()\n                    .withDescription(\n                            \"Optional flag to specify whether to fail if a field is missing or not, false by default.\");\n\n    public static final ConfigOption<String> PROTOBUF_DESCRIPTOR_FILE =\n            ConfigOptions.key(\"descriptor-file\")\n                    .stringType()\n                    .noDefaultValue()\n                    .withDescription(\n                            \"Optional flag to skip fields and rows with parse errors instead of failing;\\n\"\n                                    + \"fields are set to null in case of errors, false by default.\");\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/rowdata/ProtobufRowDataDeserializationSchema.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.rowdata;\n\nimport static java.lang.String.format;\n\nimport java.io.IOException;\nimport java.io.ObjectInputStream;\nimport java.io.ObjectOutputStream;\nimport java.util.Objects;\n\nimport org.apache.flink.api.common.serialization.AbstractDeserializationSchema;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.types.logical.RowType;\nimport org.apache.flink.util.Preconditions;\n\nimport com.google.protobuf.Descriptors;\nimport com.google.protobuf.GeneratedMessageV3;\nimport com.google.protobuf.Message;\n\nimport flink.examples.sql._05.format.formats.protobuf.row.ProtobufUtils;\nimport flink.examples.sql._05.format.formats.protobuf.row.typeutils.ProtobufSchemaConverter;\n\n\npublic class ProtobufRowDataDeserializationSchema extends AbstractDeserializationSchema<RowData> {\n\n    /**\n     * Protobuf message class for deserialization. Might be null if message class is not available.\n     */\n    private Class<? extends Message> messageClazz;\n\n    /**\n     * Protobuf serialization descriptorBytes\n     */\n    private byte[] descriptorBytes;\n\n    /**\n     * Protobuf serialization descriptor.\n     */\n    private transient Descriptors.Descriptor descriptor;\n\n    /**\n     * Type information describing the result type.\n     */\n    private transient RowType protobufOriginalRowType;\n\n    /** Flag indicating whether to ignore invalid fields/rows (default: throw an exception). */\n    private final boolean ignoreParseErrors;\n\n    /** TypeInformation of the produced {@link RowData}. */\n    private RowType expectedResultType;\n\n    /**\n     * Protobuf defaultInstance for descriptor\n     */\n    private transient Message defaultInstance;\n\n    private ProtobufToRowDataConverters.ProtobufToRowDataConverter runtimeConverter;\n\n    /**\n     * Creates a Protobuf deserialization descriptor for the given message class. Having the\n     * concrete Protobuf message class might improve performance.\n     *\n     * @param messageClazz Protobuf message class used to deserialize Protobuf's message to Flink's row\n     * @param ignoreParseErrors\n     */\n    public ProtobufRowDataDeserializationSchema(\n            Class<? 
extends GeneratedMessageV3> messageClazz\n            , boolean ignoreParseErrors\n            , RowType expectedResultType) {\n        this.ignoreParseErrors = ignoreParseErrors;\n        Preconditions.checkNotNull(messageClazz, \"Protobuf message class must not be null.\");\n        this.messageClazz = messageClazz;\n        this.descriptorBytes = null;\n        this.descriptor = ProtobufUtils.getDescriptor(messageClazz);\n        this.defaultInstance = ProtobufUtils.getDefaultInstance(messageClazz);\n\n        // protobuf 本身的 schema\n        this.protobufOriginalRowType = (RowType) ProtobufSchemaConverter.convertToRowDataTypeInfo(messageClazz);\n\n        this.expectedResultType = expectedResultType;\n\n        this.runtimeConverter = new ProtobufToRowDataConverters(false)\n                .createRowDataConverterByLogicalType(this.descriptor, this.expectedResultType);\n    }\n\n    /**\n     * Creates a Protobuf deserialization descriptor for the given Protobuf descriptorBytes.\n     *\n     * @param descriptorBytes Protobuf descriptorBytes to deserialize Protobuf's message to Flink's row\n     * @param ignoreParseErrors\n     */\n//    public ProtobufRowDataDeserializationSchema(\n//            byte[] descriptorBytes\n//            , boolean ignoreParseErrors\n//            , RowType expectedResultType) {\n//        this.ignoreParseErrors = ignoreParseErrors;\n//        Preconditions.checkNotNull(descriptorBytes, \"Protobuf descriptorBytes must not be null.\");\n//        this.messageClazz = null;\n//        this.descriptorBytes = descriptorBytes;\n//        this.descriptor = ProtobufUtils.getDescriptor(descriptorBytes);\n////        this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(this.descriptor);\n//        this.defaultInstance = DynamicMessage.newBuilder(this.descriptor).getDefaultInstanceForType();\n////        this.runtimeConverter = new ProtobufToRowDataConverters(true)\n////                .createRowDataConverter(this.descriptor, this.typeInfo, null);\n//\n//        this.expectedResultType = expectedResultType;\n//    }\n\n    @Override\n    public RowData deserialize(byte[] bytes) throws IOException {\n        if (bytes == null) {\n            return null;\n        }\n        try {\n\n            Message message = this.defaultInstance\n                    .newBuilderForType()\n                    .mergeFrom(bytes)\n                    .build();\n\n            return (RowData) runtimeConverter.convert(message);\n        } catch (Throwable t) {\n            if (ignoreParseErrors) {\n                return null;\n            }\n            throw new IOException(\n                    format(\"Failed to deserialize Protobuf '%s'.\", new String(bytes)), t);\n        }\n    }\n\n    private void writeObject(ObjectOutputStream outputStream) throws IOException {\n        if (Objects.nonNull(this.messageClazz)) {\n            outputStream.writeObject(this.messageClazz);\n            outputStream.writeObject(this.expectedResultType);\n        } else {\n            outputStream.writeObject(this.descriptorBytes);\n        }\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private void readObject(ObjectInputStream inputStream) throws ClassNotFoundException, IOException {\n\n        Object o = inputStream.readObject();\n\n        this.expectedResultType = (RowType) inputStream.readObject();\n\n        if (o instanceof Class) {\n            this.messageClazz = (Class<? 
extends Message>) o;\n            this.descriptor = ProtobufUtils.getDescriptor(messageClazz);\n            this.defaultInstance = ProtobufUtils.getDefaultInstance(messageClazz);\n\n            this.protobufOriginalRowType = (RowType) ProtobufSchemaConverter.convertToRowDataTypeInfo(messageClazz);\n\n            this.runtimeConverter = new ProtobufToRowDataConverters(false)\n                    .createRowDataConverterByDescriptor(this.descriptor, this.expectedResultType);\n        } else {\n//            this.descriptorBytes = (byte[]) o;\n//            this.descriptor = ProtobufUtils.getDescriptor(this.descriptorBytes);\n//            this.typeInfo = (RowTypeInfo) ProtobufSchemaConverter.convertToTypeInfo(this.descriptor);\n//            this.defaultInstance = DynamicMessage.newBuilder(this.descriptor).getDefaultInstanceForType();\n        }\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/rowdata/ProtobufRowDataSerializationSchema.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.rowdata;\n\nimport org.apache.flink.api.common.serialization.SerializationSchema;\nimport org.apache.flink.table.data.RowData;\n\n\npublic class ProtobufRowDataSerializationSchema implements SerializationSchema<RowData> {\n    @Override\n    public byte[] serialize(RowData element) {\n        return new byte[0];\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/rowdata/ProtobufToRowDataConverters.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.rowdata;\n\nimport java.io.Serializable;\nimport java.math.BigDecimal;\nimport java.math.BigInteger;\nimport java.sql.Date;\nimport java.sql.Time;\nimport java.sql.Timestamp;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.TimeZone;\n\nimport org.apache.flink.table.data.GenericArrayData;\nimport org.apache.flink.table.data.GenericMapData;\nimport org.apache.flink.table.data.GenericRowData;\nimport org.apache.flink.table.data.StringData;\nimport org.apache.flink.table.types.logical.ArrayType;\nimport org.apache.flink.table.types.logical.DateType;\nimport org.apache.flink.table.types.logical.DecimalType;\nimport org.apache.flink.table.types.logical.LogicalType;\nimport org.apache.flink.table.types.logical.MapType;\nimport org.apache.flink.table.types.logical.RowType;\nimport org.apache.flink.table.types.logical.TimeType;\nimport org.apache.flink.table.types.logical.VarCharType;\nimport org.joda.time.DateTime;\nimport org.joda.time.DateTimeFieldType;\nimport org.joda.time.LocalDate;\nimport org.joda.time.LocalTime;\n\nimport com.google.protobuf.ByteString;\nimport com.google.protobuf.Descriptors;\nimport com.google.protobuf.Descriptors.FieldDescriptor;\nimport com.google.protobuf.Descriptors.FieldDescriptor.JavaType;\nimport com.google.protobuf.DynamicMessage;\nimport com.google.protobuf.MapEntry;\nimport com.google.protobuf.Message;\n\n\npublic class ProtobufToRowDataConverters implements Serializable {\n\n    /**\n     * Used for time conversions into SQL types.\n     */\n    private static final TimeZone LOCAL_TZ = TimeZone.getDefault();\n\n    private final boolean isDynamicMessage;\n\n    public ProtobufToRowDataConverters(boolean isDynamicMessage) {\n        this.isDynamicMessage = isDynamicMessage;\n    }\n\n\n    @FunctionalInterface\n    public interface ProtobufToRowDataConverter extends Serializable {\n        Object convert(Object object);\n    }\n\n    public ProtobufToRowDataConverter createRowDataConverterByLogicalType(\n            Descriptors.Descriptor descriptor\n            , RowType rowType) {\n        final FieldDescriptor[] fieldDescriptors =\n                descriptor.getFields().toArray(new FieldDescriptor[0]);\n\n        List<LogicalType> fieldLogicalTypes = rowType.getChildren();\n\n        final int length = fieldDescriptors.length;\n\n        final ProtobufToRowDataConverter[] runtimeConverters = new ProtobufToRowDataConverter[length];\n\n        for (int i = 0; i < length; i++) {\n            final FieldDescriptor fieldDescriptor = fieldDescriptors[i];\n            final LogicalType fieldLogicalType = fieldLogicalTypes.get(i);\n            runtimeConverters[i] = createConverterByLogicalType(fieldDescriptor, fieldLogicalType);\n        }\n\n        return (Object o) -> {\n            Message message = (Message) o;\n            final GenericRowData genericRowData = new GenericRowData(length);\n            for (int i = 0; i < length; i++) {\n                Object fieldO = message.getField(fieldDescriptors[i]);\n                genericRowData.setField(i, runtimeConverters[i].convert(fieldO));\n            }\n            return genericRowData;\n        };\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private ProtobufToRowDataConverter createConverterByLogicalType(Descriptors.GenericDescriptor genericDescriptor, LogicalType info) {\n        // we perform the conversion based on descriptor information but enriched with pre-computed\n        // type 
information where useful (i.e., for list)\n\n        if (info instanceof RowType) {\n            return createRowDataConverterByDescriptor((Descriptors.Descriptor) genericDescriptor, (RowType) info);\n        } else {\n\n            FieldDescriptor fieldDescriptor = ((FieldDescriptor) genericDescriptor);\n\n            switch (info.getTypeRoot()) {\n                case CHAR:\n                case VARCHAR:\n                case BOOLEAN:\n                case DECIMAL:\n                case TINYINT:\n                case SMALLINT:\n                case INTEGER:\n                case BIGINT:\n                case FLOAT:\n                case DOUBLE:\n                    if (info instanceof ArrayType) {\n                        // list\n                        LogicalType elementLogicalType = ((ArrayType) info).getElementType();\n\n                        return createArrayConverter(elementLogicalType);\n                    } else {\n                        return createObjectConverter(info);\n                    }\n                case ARRAY:\n                case MULTISET:\n                    // list\n                    LogicalType elementLogicalType = ((ArrayType) info).getElementType();\n\n                    if (fieldDescriptor.getJavaType() != JavaType.MESSAGE) {\n                        // list\n                        return createArrayConverter(elementLogicalType);\n                    }\n\n                    Descriptors.Descriptor elementDescriptor = fieldDescriptor.getMessageType();\n\n                    ProtobufToRowDataConverter elementConverter = this.createConverterByDescriptor(elementDescriptor, elementLogicalType);\n\n                    return (Object o) -> new GenericArrayData(((List) o)\n                            .stream()\n                            .map(elementConverter::convert)\n                            .toArray());\n                case ROW:\n                    // row\n                    return createRowDataConverterByDescriptor(((FieldDescriptor) genericDescriptor).getMessageType(), (RowType) info);\n                case MAP:\n                    // map\n                    final MapType mapTypeInfo = (MapType) info;\n\n                    // todo map's key only support string\n                    final ProtobufToRowDataConverter keyConverter = Object::toString;\n\n                    final FieldDescriptor keyFieldDescriptor =\n                            fieldDescriptor.getMessageType().getFields().get(0);\n\n                    final FieldDescriptor valueFieldDescriptor =\n                            fieldDescriptor.getMessageType().getFields().get(1);\n\n                    final LogicalType valueTypeInfo =\n                            mapTypeInfo.getValueType();\n\n                    final ProtobufToRowDataConverter valueConverter =\n                            createConverterByDescriptor(valueFieldDescriptor, valueTypeInfo);\n\n                    if (this.isDynamicMessage) {\n\n                        return (Object o) -> {\n                            final List<DynamicMessage> dynamicMessages = (List<DynamicMessage>) o;\n\n                            final Map<StringData, Object> convertedMap = new HashMap<>(dynamicMessages.size());\n\n                            dynamicMessages.forEach((DynamicMessage dynamicMessage) -> {\n                                convertedMap.put(\n                                        StringData.fromString((String) keyConverter.convert(dynamicMessage.getField(keyFieldDescriptor)))\n                                      
  , valueConverter.convert(dynamicMessage.getField(valueFieldDescriptor)));\n                            });\n\n                            return new GenericMapData(convertedMap);\n                        };\n\n                    } else {\n\n                        return (Object o) -> {\n                            final List<MapEntry> mapEntryList = (List<MapEntry>) o;\n                            final Map<StringData, Object> convertedMap = new HashMap<>(mapEntryList.size());\n                            mapEntryList.forEach((MapEntry message) -> {\n                                convertedMap.put(\n                                        StringData.fromString((String) keyConverter.convert(message.getKey()))\n                                        , valueConverter.convert(message.getValue()));\n                            });\n\n                            return new GenericMapData(convertedMap);\n                        };\n                    }\n                case BINARY:\n                case VARBINARY:\n                    return (Object o) -> {\n                        final byte[] bytes = ((ByteString) o).toByteArray();\n                        if (info instanceof DecimalType) {\n                            return convertToDecimal(bytes);\n                        }\n                        return bytes;\n                    };\n            }\n        }\n\n        throw new IllegalArgumentException(\"Unsupported Protobuf type '\" + genericDescriptor.getName() + \"'.\");\n\n    }\n\n    public ProtobufToRowDataConverter createRowDataConverterByDescriptor(\n            Descriptors.Descriptor descriptor\n            , RowType rowType) {\n        final FieldDescriptor[] fieldDescriptors =\n                descriptor.getFields().toArray(new FieldDescriptor[0]);\n//        final TypeInformation<?>[] fieldTypeInfos = rowTypeInfo.getFieldTypes();\n\n        List<LogicalType> fieldLogicalTypes = rowType.getChildren();\n\n        final int length = fieldDescriptors.length;\n\n        final ProtobufToRowDataConverter[] runtimeConverters = new ProtobufToRowDataConverter[length];\n\n        for (int i = 0; i < length; i++) {\n            final FieldDescriptor fieldDescriptor = fieldDescriptors[i];\n            final LogicalType fieldLogicalType = fieldLogicalTypes.get(i);\n            runtimeConverters[i] = createConverterByDescriptor(fieldDescriptor, fieldLogicalType);\n        }\n\n        return (Object o) -> {\n            Message message = (Message) o;\n            final GenericRowData genericRowData = new GenericRowData(length);\n            for (int i = 0; i < length; i++) {\n                Object fieldO = message.getField(fieldDescriptors[i]);\n                genericRowData.setField(i, runtimeConverters[i].convert(fieldO));\n            }\n            return genericRowData;\n        };\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private ProtobufToRowDataConverter createConverterByDescriptor(Descriptors.GenericDescriptor genericDescriptor, LogicalType info) {\n        // we perform the conversion based on descriptor information but enriched with pre-computed\n        // type information where useful (i.e., for list)\n\n        if (genericDescriptor instanceof Descriptors.Descriptor) {\n\n            return createRowDataConverterByDescriptor((Descriptors.Descriptor) genericDescriptor, (RowType) info);\n\n        } else if (genericDescriptor instanceof FieldDescriptor) {\n\n            FieldDescriptor fieldDescriptor = ((FieldDescriptor) genericDescriptor);\n\n            // 
field\n            switch (fieldDescriptor.getType()) {\n                case INT32:\n                case FIXED32:\n                case UINT32:\n                case SFIXED32:\n                case SINT32:\n                case INT64:\n                case UINT64:\n                case FIXED64:\n                case SFIXED64:\n                case SINT64:\n                case DOUBLE:\n                case FLOAT:\n                case BOOL:\n                case STRING:\n                    if (info instanceof ArrayType) {\n                        // list\n                        LogicalType elementLogicalType = ((ArrayType) info).getElementType();\n\n                        return createArrayConverter(elementLogicalType);\n                    } else {\n                        return createObjectConverter(info);\n                    }\n                case ENUM:\n                    if (info instanceof ArrayType) {\n\n                        // list\n                        return (Object o) -> new GenericArrayData(((List) o)\n                                .stream()\n                                .map(Object::toString)\n                                .toArray());\n                    } else {\n                        return Object::toString;\n                    }\n                case GROUP:\n                case MESSAGE:\n                    if (info instanceof ArrayType) {\n                        // list\n                        LogicalType elementLogicalType = ((ArrayType) info).getElementType();\n                        Descriptors.Descriptor elementDescriptor = fieldDescriptor.getMessageType();\n\n                        ProtobufToRowDataConverter elementConverter = this.createConverterByDescriptor(elementDescriptor, elementLogicalType);\n\n                        return (Object o) -> new GenericArrayData(((List) o)\n                                .stream()\n                                .map(elementConverter::convert)\n                                .toArray());\n\n                    } else if (info instanceof MapType) {\n                        // map\n                        final MapType mapTypeInfo = (MapType) info;\n\n                        // todo map's key only support string\n                        final ProtobufToRowDataConverter keyConverter = Object::toString;\n\n                        final FieldDescriptor keyFieldDescriptor =\n                                fieldDescriptor.getMessageType().getFields().get(0);\n\n                        final FieldDescriptor valueFieldDescriptor =\n                                fieldDescriptor.getMessageType().getFields().get(1);\n\n                        final LogicalType valueTypeInfo =\n                                mapTypeInfo.getValueType();\n\n                        final ProtobufToRowDataConverter valueConverter =\n                                createConverterByDescriptor(valueFieldDescriptor, valueTypeInfo);\n\n                        if (this.isDynamicMessage) {\n\n                            return (Object o) -> {\n                                final List<DynamicMessage> dynamicMessages = (List<DynamicMessage>) o;\n\n                                final Map<StringData, Object> convertedMap = new HashMap<>(dynamicMessages.size());\n\n                                dynamicMessages.forEach((DynamicMessage dynamicMessage) -> {\n                                    convertedMap.put(\n                                            StringData.fromString((String) 
keyConverter.convert(dynamicMessage.getField(keyFieldDescriptor)))\n                                            , valueConverter.convert(dynamicMessage.getField(valueFieldDescriptor)));\n                                });\n\n                                return new GenericMapData(convertedMap);\n                            };\n\n                        } else {\n\n                            return (Object o) -> {\n                                final List<MapEntry> mapEntryList = (List<MapEntry>) o;\n                                final Map<StringData, Object> convertedMap = new HashMap<>(mapEntryList.size());\n                                mapEntryList.forEach((MapEntry message) -> {\n                                    convertedMap.put(\n                                            StringData.fromString((String) keyConverter.convert(message.getKey()))\n                                            , valueConverter.convert(message.getValue()));\n                                });\n\n                                return new GenericMapData(convertedMap);\n                            };\n                        }\n                    } else if (info instanceof RowType) {\n                        // row\n                        return createRowDataConverterByDescriptor(((FieldDescriptor) genericDescriptor).getMessageType(), (RowType) info);\n                    }\n                    throw new IllegalStateException(\"Message expected but was: \");\n                case BYTES:\n\n                    return (Object o) -> {\n                        final byte[] bytes = ((ByteString) o).toByteArray();\n                        if (info instanceof DecimalType) {\n                            return convertToDecimal(bytes);\n                        }\n                        return bytes;\n                    };\n            }\n        }\n\n        throw new IllegalArgumentException(\"Unsupported Protobuf type '\" + genericDescriptor.getName() + \"'.\");\n\n    }\n\n    @SuppressWarnings(\"unchecked\")\n    private ProtobufToRowDataConverter createArrayConverter(LogicalType info) {\n\n        ProtobufToRowDataConverter elementConverter;\n\n        if (info instanceof DateType) {\n\n            elementConverter = this::convertToDate;\n\n        } else if (info instanceof TimeType) {\n\n            elementConverter = this::convertToTime;\n        } else if (info instanceof VarCharType) {\n            elementConverter = this::convertToString;\n        } else {\n\n            elementConverter = (Object fieldO) -> (fieldO);\n        }\n\n        return (Object o) -> new GenericArrayData(((List) o)\n                .stream()\n                .map(elementConverter::convert)\n                .toArray());\n    }\n\n    private StringData convertToString(Object filedO) {\n\n        return StringData.fromString((String) filedO);\n    }\n\n    private ProtobufToRowDataConverter createObjectConverter(LogicalType info) {\n        if (info instanceof DateType) {\n            return this::convertToDate;\n        } else if (info instanceof TimeType) {\n            return this::convertToTime;\n        } else if (info instanceof VarCharType) {\n            return this::convertToString;\n        } else {\n            return (Object o) -> o;\n        }\n    }\n\n    // --------------------------------------------------------------------------------------------\n\n    private BigDecimal convertToDecimal(byte[] bytes) {\n        return new BigDecimal(new BigInteger(bytes));\n    }\n\n    private Date 
convertToDate(Object object) {\n        final long millis;\n        if (object instanceof Integer) {\n            final Integer value = (Integer) object;\n            // adopted from Apache Calcite\n            final long t = (long) value * 86400000L;\n            millis = t - (long) LOCAL_TZ.getOffset(t);\n        } else {\n            // use 'provided' Joda time\n            final LocalDate value = (LocalDate) object;\n            millis = value.toDate().getTime();\n        }\n        return new Date(millis);\n    }\n\n    private Time convertToTime(Object object) {\n        final long millis;\n        if (object instanceof Integer) {\n            millis = (Integer) object;\n        } else {\n            // use 'provided' Joda time\n            final LocalTime value = (LocalTime) object;\n            millis = value.get(DateTimeFieldType.millisOfDay());\n        }\n        return new Time(millis - LOCAL_TZ.getOffset(millis));\n    }\n\n    private Timestamp convertToTimestamp(Object object) {\n        final long millis;\n        if (object instanceof Long) {\n            millis = (Long) object;\n        } else {\n            // use 'provided' Joda time\n            final DateTime value = (DateTime) object;\n            millis = value.toDate().getTime();\n        }\n        return new Timestamp(millis - LOCAL_TZ.getOffset(millis));\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/protobuf/rowdata/RowDataToProtobufConverters.java",
    "content": "package flink.examples.sql._05.format.formats.protobuf.rowdata;\n\n\npublic class RowDataToProtobufConverters {\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/utils/MoreRunnables.java",
    "content": "package flink.examples.sql._05.format.formats.utils;\n\npublic class MoreRunnables {\n\n\n    public static <EXCEPTION extends Throwable> void throwing(ThrowableRunable<EXCEPTION> throwableRunable) {\n        try {\n            throwableRunable.run();\n        } catch (Throwable e) {\n            throw new RuntimeException(e);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/utils/MoreSuppliers.java",
    "content": "package flink.examples.sql._05.format.formats.utils;\n\n\npublic class MoreSuppliers {\n\n    private MoreSuppliers() {\n        throw new UnsupportedOperationException();\n    }\n\n    public static <OUT> OUT throwing(ThrowableSupplier<OUT, Throwable> throwableSupplier) {\n        try {\n            return throwableSupplier.get();\n        } catch (Throwable e) {\n            throw new RuntimeException(e);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/utils/ThrowableRunable.java",
    "content": "package flink.examples.sql._05.format.formats.utils;\n\n@FunctionalInterface\npublic interface ThrowableRunable<EXCEPTION extends Throwable> {\n\n    void run() throws EXCEPTION;\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_05/format/formats/utils/ThrowableSupplier.java",
    "content": "package flink.examples.sql._05.format.formats.utils;\n\n@FunctionalInterface\npublic interface ThrowableSupplier<OUT, EXCEPTION extends Throwable> {\n\n    OUT get() throws EXCEPTION;\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/CalciteTest.java",
    "content": "package flink.examples.sql._06.calcite;\n\nimport org.apache.calcite.sql.SqlNode;\nimport org.apache.calcite.sql.parser.SqlParseException;\nimport org.apache.calcite.sql.parser.SqlParser;\n\n\npublic class CalciteTest {\n\n    public static void main(String[] args) throws SqlParseException {\n        SqlParser parser = SqlParser.create(\"select c,d from source where a = '6'\", SqlParser.Config.DEFAULT);\n        SqlNode sqlNode = parser.parseStmt();\n\n        System.out.println();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/ParserTest.java",
    "content": "package flink.examples.sql._06.calcite;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.api.java.tuple.Tuple3;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.sql._01.countdistincterror.udf.Mod_UDF;\nimport flink.examples.sql._01.countdistincterror.udf.StatusMapper_UDF;\n\n\npublic class ParserTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();\n\n        env.setParallelism(10);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useOldPlanner()\n                .inStreamingMode()\n                .build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Tuple3<String, Long, Long>> tuple3DataStream =\n                env.fromCollection(Arrays.asList(\n                        Tuple3.of(\"2\", 1L, 1627254000000L),\n                        Tuple3.of(\"2\", 1L, 1627218000000L + 5000L),\n                        Tuple3.of(\"2\", 101L, 1627218000000L + 6000L),\n                        Tuple3.of(\"2\", 201L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 7000L),\n                        Tuple3.of(\"2\", 301L, 1627218000000L + 86400000 + 7000L)))\n                        .assignTimestampsAndWatermarks(\n                                new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Long, Long>>(Time.seconds(0L)) {\n                                    @Override\n                                    public long extractTimestamp(Tuple3<String, Long, Long> element) {\n                                        return element.f2;\n                                    }\n                                });\n\n        tEnv.registerFunction(\"mod\", new Mod_UDF());\n\n        tEnv.registerFunction(\"status_mapper\", new StatusMapper_UDF());\n\n        tEnv.createTemporaryView(\"source_db.source_table\", tuple3DataStream,\n                \"status, id, timestamp, rowtime.rowtime\");\n\n        String sql = \"SELECT\\n\"\n                + \"  sum(part_pv) as pv,\\n\"\n                + \"  window_start\\n\"\n                + \"FROM (\\n\"\n                + \"\\tSELECT\\n\"\n                + \"\\t  count(1) as part_pv,\\n\"\n                + \"\\t  cast(tumble_start(rowtime, INTERVAL '60' SECOND) as bigint) * 1000 as window_start\\n\"\n                + \"\\tFROM\\n\"\n                + \"\\t  source_db.source_table\\n\"\n                + \"\\tGROUP BY\\n\"\n                + \"\\t  tumble(rowtime, INTERVAL '60' SECOND)\\n\"\n                + \"\\t  , mod(id, 1024)\\n\"\n                + \")\\n\"\n                + \"GROUP BY\\n\"\n    
            + \"  window_start\";\n\n        Table result = tEnv.sqlQuery(sql);\n\n        tEnv.toRetractStream(result, Row.class).print();\n\n        env.execute();\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/JavaccCodeGenTest.java",
    "content": "package flink.examples.sql._06.calcite.javacc;\n\n\n\npublic class JavaccCodeGenTest {\n\n    public static void main(String[] args) throws Exception {\n//       version();\n       javacc();\n    }\n\n    private static void version() throws Exception {\n        org.javacc.parser.Main.main(new String[] {\"-version\"});\n    }\n\n    private static void javacc() throws Exception {\n\n        String path = ClassLoader.getSystemResources(\"Simple1.jj\").nextElement().getPath();\n\n        org.javacc.parser.Main.main(new String[] {path});\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/Simple1Test.java",
    "content": "package flink.examples.sql._06.calcite.javacc;\n\n\nimport flink.examples.sql._06.calcite.javacc.generatedcode.Simple1;\n\n\npublic class Simple1Test {\n\n    public static void main(String[] args) throws Exception {\n        Simple1.main(args);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/generatedcode/ParseException.java",
    "content": "package flink.examples.sql._06.calcite.javacc.generatedcode;/* Generated By:JavaCC: Do not edit this line. ParseException.java Version 7.0 */\n/* JavaCCOptions:KEEP_LINE_COLUMN=true */\n/**\n * This exception is thrown when parse errors are encountered.\n * You can explicitly create objects of this exception type by\n * calling the method generateParseException in the generated\n * parser.\n *\n * You can modify this class to customize your error reporting\n * mechanisms so long as you retain the public fields.\n */\npublic class ParseException extends Exception {\n\n  /**\n   * The version identifier for this Serializable class.\n   * Increment only if the <i>serialized</i> form of the\n   * class changes.\n   */\n  private static final long serialVersionUID = 1L;\n\n  /**\n   * The end of line string for this machine.\n   */\n  protected static String EOL = System.getProperty(\"line.separator\", \"\\n\");\n\n  /**\n   * This constructor is used by the method \"generateParseException\"\n   * in the generated parser.  Calling this constructor generates\n   * a new object of this type with the fields \"currentToken\",\n   * \"expectedTokenSequences\", and \"tokenImage\" set.\n   */\n  public ParseException(Token currentTokenVal,\n                        int[][] expectedTokenSequencesVal,\n                        String[] tokenImageVal\n                       )\n  {\n    super(initialise(currentTokenVal, expectedTokenSequencesVal, tokenImageVal));\n    currentToken = currentTokenVal;\n    expectedTokenSequences = expectedTokenSequencesVal;\n    tokenImage = tokenImageVal;\n  }\n\n  /**\n   * The following constructors are for use by you for whatever\n   * purpose you can think of.  Constructing the exception in this\n   * manner makes the exception behave in the normal way - i.e., as\n   * documented in the class \"Throwable\".  The fields \"errorToken\",\n   * \"expectedTokenSequences\", and \"tokenImage\" do not contain\n   * relevant information.  The JavaCC generated code does not use\n   * these constructors.\n   */\n\n  public ParseException() {\n    super();\n  }\n\n  /** Constructor with message. */\n  public ParseException(String message) {\n    super(message);\n  }\n\n\n  /**\n   * This is the last token that has been consumed successfully.  If\n   * this object has been created due to a parse error, the token\n   * following this token will (therefore) be the first error token.\n   */\n  public Token currentToken;\n\n  /**\n   * Each entry in this array is an array of integers.  Each array\n   * of integers represents a sequence of tokens (by their ordinal\n   * values) that is expected at this point of the parse.\n   */\n  public int[][] expectedTokenSequences;\n\n  /**\n   * This is a reference to the \"tokenImage\" array of the generated\n   * parser within which the parse error occurred.  This array is\n   * defined in the generated ...Constants interface.\n   */\n  public String[] tokenImage;\n\n  /**\n   * It uses \"currentToken\" and \"expectedTokenSequences\" to generate a parse\n   * error message and returns it.  
If this object has been created\n   * due to a parse error, and you do not catch it (it gets thrown\n   * from the parser) the correct error message\n   * gets displayed.\n   */\n  private static String initialise(Token currentToken,\n                           int[][] expectedTokenSequences,\n                           String[] tokenImage) {\n\n    StringBuilder expected = new StringBuilder();\n    int maxSize = 0;\n    for (int i = 0; i < expectedTokenSequences.length; i++) {\n      if (maxSize < expectedTokenSequences[i].length) {\n        maxSize = expectedTokenSequences[i].length;\n      }\n      for (int j = 0; j < expectedTokenSequences[i].length; j++) {\n        expected.append(tokenImage[expectedTokenSequences[i][j]]).append(' ');\n      }\n      if (expectedTokenSequences[i][expectedTokenSequences[i].length - 1] != 0) {\n        expected.append(\"...\");\n      }\n      expected.append(EOL).append(\"    \");\n    }\n    String retval = \"Encountered \\\"\";\n    Token tok = currentToken.next;\n    for (int i = 0; i < maxSize; i++) {\n      if (i != 0) retval += \" \";\n      if (tok.kind == 0) {\n        retval += tokenImage[0];\n        break;\n      }\n      retval += \" \" + tokenImage[tok.kind];\n      retval += \" \\\"\";\n      retval += add_escapes(tok.image);\n      retval += \" \\\"\";\n      tok = tok.next;\n    }\n    if (currentToken.next != null) {\n      retval += \"\\\" at line \" + currentToken.next.beginLine + \", column \" + currentToken.next.beginColumn;\n    }\n    retval += \".\" + EOL;\n    \n    \n    if (expectedTokenSequences.length == 0) {\n        // Nothing to add here\n    } else {\n\t    if (expectedTokenSequences.length == 1) {\n\t      retval += \"Was expecting:\" + EOL + \"    \";\n\t    } else {\n\t      retval += \"Was expecting one of:\" + EOL + \"    \";\n\t    }\n\t    retval += expected.toString();\n    }\n    \n    return retval;\n  }\n\n\n  /**\n   * Used to convert raw characters to their escaped version\n   * when these raw version cannot be used as part of an ASCII\n   * string literal.\n   */\n  static String add_escapes(String str) {\n      StringBuilder retval = new StringBuilder();\n      char ch;\n      for (int i = 0; i < str.length(); i++) {\n        switch (str.charAt(i))\n        {\n           case '\\b':\n              retval.append(\"\\\\b\");\n              continue;\n           case '\\t':\n              retval.append(\"\\\\t\");\n              continue;\n           case '\\n':\n              retval.append(\"\\\\n\");\n              continue;\n           case '\\f':\n              retval.append(\"\\\\f\");\n              continue;\n           case '\\r':\n              retval.append(\"\\\\r\");\n              continue;\n           case '\\\"':\n              retval.append(\"\\\\\\\"\");\n              continue;\n           case '\\'':\n              retval.append(\"\\\\\\'\");\n              continue;\n           case '\\\\':\n              retval.append(\"\\\\\\\\\");\n              continue;\n           default:\n              if ((ch = str.charAt(i)) < 0x20 || ch > 0x7e) {\n                 String s = \"0000\" + Integer.toString(ch, 16);\n                 retval.append(\"\\\\u\" + s.substring(s.length() - 4, s.length()));\n              } else {\n                 retval.append(ch);\n              }\n              continue;\n        }\n      }\n      return retval.toString();\n   }\n\n}\n/* JavaCC - OriginalChecksum=de3ddfc6669ad4ae8d41fff7ccf6fbb7 (do not edit this line) */\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/generatedcode/Simple1.java",
    "content": "package flink.examples.sql._06.calcite.javacc.generatedcode;/* Simple1.java */\n/* Generated By:JavaCC: Do not edit this line. Simple1.java */\n/** Simple brace matcher. */\npublic class Simple1 implements Simple1Constants {\n\n  /** Main entry point. */\n  public static void main(String args[]) throws ParseException {\n    Simple1 parser = new Simple1(System.in);\n    parser.Input();\n  }\n\n/** Root production. */\n  static final public void Input() throws ParseException {\n    MatchedBraces();\n    label_1:\n    while (true) {\n      switch ((jj_ntk==-1)?jj_ntk_f():jj_ntk) {\n      case 1:\n      case 2:{\n        ;\n        break;\n        }\n      default:\n        jj_la1[0] = jj_gen;\n        break label_1;\n      }\n      switch ((jj_ntk==-1)?jj_ntk_f():jj_ntk) {\n      case 1:{\n        jj_consume_token(1);\n        break;\n        }\n      case 2:{\n        jj_consume_token(2);\n        break;\n        }\n      default:\n        jj_la1[1] = jj_gen;\n        jj_consume_token(-1);\n        throw new ParseException();\n      }\n    }\n    jj_consume_token(0);\n}\n\n/** Brace matching production. */\n  static final public void MatchedBraces() throws ParseException {\n    jj_consume_token(3);\n    switch ((jj_ntk==-1)?jj_ntk_f():jj_ntk) {\n    case 3:{\n      MatchedBraces();\n      break;\n      }\n    default:\n      jj_la1[2] = jj_gen;\n      ;\n    }\n    jj_consume_token(4);\n}\n\n  static private boolean jj_initialized_once = false;\n  /** Generated Token Manager. */\n  static public Simple1TokenManager token_source;\n  static SimpleCharStream jj_input_stream;\n  /** Current token. */\n  static public Token token;\n  /** Next token. */\n  static public Token jj_nt;\n  static private int jj_ntk;\n  static private int jj_gen;\n  static final private int[] jj_la1 = new int[3];\n  static private int[] jj_la1_0;\n  static {\n\t   jj_la1_init_0();\n\t}\n\tprivate static void jj_la1_init_0() {\n\t   jj_la1_0 = new int[] {0x6,0x6,0x8,};\n\t}\n\n  /** Constructor with InputStream. */\n  public Simple1(java.io.InputStream stream) {\n\t  this(stream, null);\n  }\n  /** Constructor with InputStream and supplied encoding */\n  public Simple1(java.io.InputStream stream, String encoding) {\n\t if (jj_initialized_once) {\n\t   System.out.println(\"ERROR: Second call to constructor of static parser.  \");\n\t   System.out.println(\"\t   You must either use ReInit() or set the JavaCC option STATIC to false\");\n\t   System.out.println(\"\t   during parser generation.\");\n\t   throw new Error();\n\t }\n\t jj_initialized_once = true;\n\t try { jj_input_stream = new SimpleCharStream(stream, encoding, 1, 1); } catch(java.io.UnsupportedEncodingException e) { throw new RuntimeException(e); }\n\t token_source = new Simple1TokenManager(jj_input_stream);\n\t token = new Token();\n\t jj_ntk = -1;\n\t jj_gen = 0;\n\t for (int i = 0; i < 3; i++) jj_la1[i] = -1;\n  }\n\n  /** Reinitialise. */\n  static public void ReInit(java.io.InputStream stream) {\n\t  ReInit(stream, null);\n  }\n  /** Reinitialise. */\n  static public void ReInit(java.io.InputStream stream, String encoding) {\n\t try { jj_input_stream.ReInit(stream, encoding, 1, 1); } catch(java.io.UnsupportedEncodingException e) { throw new RuntimeException(e); }\n\t token_source.ReInit(jj_input_stream);\n\t token = new Token();\n\t jj_ntk = -1;\n\t jj_gen = 0;\n\t for (int i = 0; i < 3; i++) jj_la1[i] = -1;\n  }\n\n  /** Constructor. 
*/\n  public Simple1(java.io.Reader stream) {\n\t if (jj_initialized_once) {\n\t   System.out.println(\"ERROR: Second call to constructor of static parser. \");\n\t   System.out.println(\"\t   You must either use ReInit() or set the JavaCC option STATIC to false\");\n\t   System.out.println(\"\t   during parser generation.\");\n\t   throw new Error();\n\t }\n\t jj_initialized_once = true;\n\t jj_input_stream = new SimpleCharStream(stream, 1, 1);\n\t token_source = new Simple1TokenManager(jj_input_stream);\n\t token = new Token();\n\t jj_ntk = -1;\n\t jj_gen = 0;\n\t for (int i = 0; i < 3; i++) jj_la1[i] = -1;\n  }\n\n  /** Reinitialise. */\n  static public void ReInit(java.io.Reader stream) {\n\tif (jj_input_stream == null) {\n\t   jj_input_stream = new SimpleCharStream(stream, 1, 1);\n\t} else {\n\t   jj_input_stream.ReInit(stream, 1, 1);\n\t}\n\tif (token_source == null) {\n token_source = new Simple1TokenManager(jj_input_stream);\n\t}\n\n\t token_source.ReInit(jj_input_stream);\n\t token = new Token();\n\t jj_ntk = -1;\n\t jj_gen = 0;\n\t for (int i = 0; i < 3; i++) jj_la1[i] = -1;\n  }\n\n  /** Constructor with generated Token Manager. */\n  public Simple1(Simple1TokenManager tm) {\n\t if (jj_initialized_once) {\n\t   System.out.println(\"ERROR: Second call to constructor of static parser. \");\n\t   System.out.println(\"\t   You must either use ReInit() or set the JavaCC option STATIC to false\");\n\t   System.out.println(\"\t   during parser generation.\");\n\t   throw new Error();\n\t }\n\t jj_initialized_once = true;\n\t token_source = tm;\n\t token = new Token();\n\t jj_ntk = -1;\n\t jj_gen = 0;\n\t for (int i = 0; i < 3; i++) jj_la1[i] = -1;\n  }\n\n  /** Reinitialise. */\n  public void ReInit(Simple1TokenManager tm) {\n\t token_source = tm;\n\t token = new Token();\n\t jj_ntk = -1;\n\t jj_gen = 0;\n\t for (int i = 0; i < 3; i++) jj_la1[i] = -1;\n  }\n\n  static private Token jj_consume_token(int kind) throws ParseException {\n\t Token oldToken;\n\t if ((oldToken = token).next != null) token = token.next;\n\t else token = token.next = token_source.getNextToken();\n\t jj_ntk = -1;\n\t if (token.kind == kind) {\n\t   jj_gen++;\n\t   return token;\n\t }\n\t token = oldToken;\n\t jj_kind = kind;\n\t throw generateParseException();\n  }\n\n\n/** Get the next Token. */\n  static final public Token getNextToken() {\n\t if (token.next != null) token = token.next;\n\t else token = token.next = token_source.getNextToken();\n\t jj_ntk = -1;\n\t jj_gen++;\n\t return token;\n  }\n\n/** Get the specific Token. */\n  static final public Token getToken(int index) {\n\t Token t = token;\n\t for (int i = 0; i < index; i++) {\n\t   if (t.next != null) t = t.next;\n\t   else t = t.next = token_source.getNextToken();\n\t }\n\t return t;\n  }\n\n  static private int jj_ntk_f() {\n\t if ((jj_nt=token.next) == null)\n\t   return (jj_ntk = (token.next=token_source.getNextToken()).kind);\n\t else\n\t   return (jj_ntk = jj_nt.kind);\n  }\n\n  static private java.util.List<int[]> jj_expentries = new java.util.ArrayList<int[]>();\n  static private int[] jj_expentry;\n  static private int jj_kind = -1;\n\n  /** Generate ParseException. 
*/\n  static public ParseException generateParseException() {\n\t jj_expentries.clear();\n\t boolean[] la1tokens = new boolean[5];\n\t if (jj_kind >= 0) {\n\t   la1tokens[jj_kind] = true;\n\t   jj_kind = -1;\n\t }\n\t for (int i = 0; i < 3; i++) {\n\t   if (jj_la1[i] == jj_gen) {\n\t\t for (int j = 0; j < 32; j++) {\n\t\t   if ((jj_la1_0[i] & (1<<j)) != 0) {\n\t\t\t la1tokens[j] = true;\n\t\t   }\n\t\t }\n\t   }\n\t }\n\t for (int i = 0; i < 5; i++) {\n\t   if (la1tokens[i]) {\n\t\t jj_expentry = new int[1];\n\t\t jj_expentry[0] = i;\n\t\t jj_expentries.add(jj_expentry);\n\t   }\n\t }\n\t int[][] exptokseq = new int[jj_expentries.size()][];\n\t for (int i = 0; i < jj_expentries.size(); i++) {\n\t   exptokseq[i] = jj_expentries.get(i);\n\t }\n\t return new ParseException(token, exptokseq, tokenImage);\n  }\n\n  static private boolean trace_enabled;\n\n/** Trace enabled. */\n  static final public boolean trace_enabled() {\n\t return trace_enabled;\n  }\n\n  /** Enable tracing. */\n  static final public void enable_tracing() {\n  }\n\n  /** Disable tracing. */\n  static final public void disable_tracing() {\n  }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/generatedcode/Simple1Constants.java",
    "content": "package flink.examples.sql._06.calcite.javacc.generatedcode;/* Generated By:JavaCC: Do not edit this line. Simple1Constants.java */\n\n/**\n * Token literal values and constants.\n * Generated by org.javacc.parser.OtherFilesGen#start()\n */\npublic interface Simple1Constants {\n\n  /** End of File. */\n  int EOF = 0;\n\n  /** Lexical state. */\n  int DEFAULT = 0;\n\n  /** Literal token values. */\n  String[] tokenImage = {\n    \"<EOF>\",\n    \"\\\"\\\\n\\\"\",\n    \"\\\"\\\\r\\\"\",\n    \"\\\"{\\\"\",\n    \"\\\"}\\\"\",\n  };\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/generatedcode/Simple1TokenManager.java",
    "content": "package flink.examples.sql._06.calcite.javacc.generatedcode;/* Simple1TokenManager.java */\n/* Generated By:JavaCC: Do not edit this line. Simple1TokenManager.java */\n\n/** Token Manager. */\npublic class Simple1TokenManager implements Simple1Constants {\n\n  /** Debug output. */\n  public static  java.io.PrintStream debugStream = System.out;\n  /** Set debug output. */\n  public static  void setDebugStream(java.io.PrintStream ds) { debugStream = ds; }\nstatic private int jjStopAtPos(int pos, int kind)\n{\n   jjmatchedKind = kind;\n   jjmatchedPos = pos;\n   return pos + 1;\n}\nstatic private int jjMoveStringLiteralDfa0_0(){\n   switch(curChar)\n   {\n      case 10:\n         return jjStopAtPos(0, 1);\n      case 13:\n         return jjStopAtPos(0, 2);\n      case 123:\n         return jjStopAtPos(0, 3);\n      case 125:\n         return jjStopAtPos(0, 4);\n      default :\n         return 1;\n   }\n}\n\n/** Token literal values. */\npublic static final String[] jjstrLiteralImages = {\n\"\", \"\\12\", \"\\15\", \"\\173\", \"\\175\", };\nstatic protected Token jjFillToken()\n{\n   final Token t;\n   final String curTokenImage;\n   final int beginLine;\n   final int endLine;\n   final int beginColumn;\n   final int endColumn;\n   String im = jjstrLiteralImages[jjmatchedKind];\n   curTokenImage = (im == null) ? input_stream.GetImage() : im;\n   beginLine = input_stream.getBeginLine();\n   beginColumn = input_stream.getBeginColumn();\n   endLine = input_stream.getEndLine();\n   endColumn = input_stream.getEndColumn();\n   t = Token.newToken(jjmatchedKind, curTokenImage);\n\n   t.beginLine = beginLine;\n   t.endLine = endLine;\n   t.beginColumn = beginColumn;\n   t.endColumn = endColumn;\n\n   return t;\n}\nstatic final int[] jjnextStates = {0\n};\n\nstatic int curLexState = 0;\nstatic int defaultLexState = 0;\nstatic int jjnewStateCnt;\nstatic int jjround;\nstatic int jjmatchedPos;\nstatic int jjmatchedKind;\n\n/** Get the next Token. */\npublic static Token getNextToken() \n{\n  Token matchedToken;\n  int curPos = 0;\n\n  EOFLoop :\n  for (;;)\n  {\n   try\n   {\n      curChar = input_stream.BeginToken();\n   }\n   catch(Exception e)\n   {\n      jjmatchedKind = 0;\n      jjmatchedPos = -1;\n      matchedToken = jjFillToken();\n      return matchedToken;\n   }\n\n   jjmatchedKind = 0x7fffffff;\n   jjmatchedPos = 0;\n   curPos = jjMoveStringLiteralDfa0_0();\n   if (jjmatchedKind != 0x7fffffff)\n   {\n      if (jjmatchedPos + 1 < curPos)\n         input_stream.backup(curPos - jjmatchedPos - 1);\n         matchedToken = jjFillToken();\n         return matchedToken;\n   }\n   int error_line = input_stream.getEndLine();\n   int error_column = input_stream.getEndColumn();\n   String error_after = null;\n   boolean EOFSeen = false;\n   try { input_stream.readChar(); input_stream.backup(1); }\n   catch (java.io.IOException e1) {\n      EOFSeen = true;\n      error_after = curPos <= 1 ? \"\" : input_stream.GetImage();\n      if (curChar == '\\n' || curChar == '\\r') {\n         error_line++;\n         error_column = 0;\n      }\n      else\n         error_column++;\n   }\n   if (!EOFSeen) {\n      input_stream.backup(1);\n      error_after = curPos <= 1 ? 
\"\" : input_stream.GetImage();\n   }\n   throw new TokenMgrError(EOFSeen, curLexState, error_line, error_column, error_after, curChar, TokenMgrError.LEXICAL_ERROR);\n  }\n}\n\nstatic void SkipLexicalActions(Token matchedToken)\n{\n   switch(jjmatchedKind)\n   {\n      default :\n         break;\n   }\n}\nstatic void MoreLexicalActions()\n{\n   jjimageLen += (lengthOfMatch = jjmatchedPos + 1);\n   switch(jjmatchedKind)\n   {\n      default :\n         break;\n   }\n}\nstatic void TokenLexicalActions(Token matchedToken)\n{\n   switch(jjmatchedKind)\n   {\n      default :\n         break;\n   }\n}\nstatic private void jjCheckNAdd(int state)\n{\n   if (jjrounds[state] != jjround)\n   {\n      jjstateSet[jjnewStateCnt++] = state;\n      jjrounds[state] = jjround;\n   }\n}\nstatic private void jjAddStates(int start, int end)\n{\n   do {\n      jjstateSet[jjnewStateCnt++] = jjnextStates[start];\n   } while (start++ != end);\n}\nstatic private void jjCheckNAddTwoStates(int state1, int state2)\n{\n   jjCheckNAdd(state1);\n   jjCheckNAdd(state2);\n}\n\n    /** Constructor. */\n    public Simple1TokenManager(SimpleCharStream stream){\n\n      if (input_stream != null)\n        throw new TokenMgrError(\"ERROR: Second call to constructor of static lexer. You must use ReInit() to initialize the static variables.\", TokenMgrError.STATIC_LEXER_ERROR);\n\n    input_stream = stream;\n  }\n\n  /** Constructor. */\n  public Simple1TokenManager (SimpleCharStream stream, int lexState){\n    ReInit(stream);\n    SwitchTo(lexState);\n  }\n\n  /** Reinitialise parser. */\n  \n  static public void ReInit(SimpleCharStream stream)\n  {\n\n\n    jjmatchedPos =\n    jjnewStateCnt =\n    0;\n    curLexState = defaultLexState;\n    input_stream = stream;\n    ReInitRounds();\n  }\n\n  static private void ReInitRounds()\n  {\n    int i;\n    jjround = 0x80000001;\n    for (i = 0; i-- > 0;)\n      jjrounds[i] = 0x80000000;\n  }\n\n  /** Reinitialise parser. */\n  static public void ReInit(SimpleCharStream stream, int lexState)\n  \n  {\n    ReInit(stream);\n    SwitchTo(lexState);\n  }\n\n  /** Switch to specified lex state. */\n  public static void SwitchTo(int lexState)\n  {\n    if (lexState >= 1 || lexState < 0)\n      throw new TokenMgrError(\"Error: Ignoring invalid lexical state : \" + lexState + \". State unchanged.\", TokenMgrError.INVALID_LEXICAL_STATE);\n    else\n      curLexState = lexState;\n  }\n\n\n/** Lexer state names. */\npublic static final String[] lexStateNames = {\n   \"DEFAULT\",\n};\n\n/** Lex State array. */\npublic static final int[] jjnewLexState = {\n   -1, -1, -1, -1, -1, \n};\nstatic final long[] jjtoToken = {\n   0x1fL, \n};\nstatic final long[] jjtoSkip = {\n   0x0L, \n};\nstatic final long[] jjtoSpecial = {\n   0x0L, \n};\nstatic final long[] jjtoMore = {\n   0x0L, \n};\n    static protected SimpleCharStream  input_stream;\n\n    static private final int[] jjrounds = new int[0];\n    static private final int[] jjstateSet = new int[2 * 0];\n    private static final StringBuilder jjimage = new StringBuilder();\n    private static StringBuilder image = jjimage;\n    private static int jjimageLen;\n    private static int lengthOfMatch;\n    static protected int curChar;\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/generatedcode/SimpleCharStream.java",
    "content": "package flink.examples.sql._06.calcite.javacc.generatedcode;/* Generated By:JavaCC: Do not edit this line. SimpleCharStream.java Version 7.0 */\n/* JavaCCOptions:STATIC=true,SUPPORT_CLASS_VISIBILITY_PUBLIC=true */\n/**\n * An implementation of interface CharStream, where the stream is assumed to\n * contain only ASCII characters (without unicode processing).\n */\n\npublic class SimpleCharStream\n{\n/** Whether parser is static. */\n  public static final boolean staticFlag = true;\n  static int bufsize;\n  static int available;\n  static int tokenBegin;\n/** Position in buffer. */\n  static public int bufpos = -1;\n  static protected int bufline[];\n  static protected int bufcolumn[];\n\n  static protected int column = 0;\n  static protected int line = 1;\n\n  static protected boolean prevCharIsCR = false;\n  static protected boolean prevCharIsLF = false;\n\n  static protected java.io.Reader inputStream;\n\n  static protected char[] buffer;\n  static protected int maxNextCharInd = 0;\n  static protected int inBuf = 0;\n  static protected int tabSize = 1;\n  static protected boolean trackLineColumn = true;\n\n  static public void setTabSize(int i) { tabSize = i; }\n  static public int getTabSize() { return tabSize; }\n\n\n\n  static protected void ExpandBuff(boolean wrapAround)\n  {\n    char[] newbuffer = new char[bufsize + 2048];\n    int newbufline[] = new int[bufsize + 2048];\n    int newbufcolumn[] = new int[bufsize + 2048];\n\n    try\n    {\n      if (wrapAround)\n      {\n        System.arraycopy(buffer, tokenBegin, newbuffer, 0, bufsize - tokenBegin);\n        System.arraycopy(buffer, 0, newbuffer, bufsize - tokenBegin, bufpos);\n        buffer = newbuffer;\n\n        System.arraycopy(bufline, tokenBegin, newbufline, 0, bufsize - tokenBegin);\n        System.arraycopy(bufline, 0, newbufline, bufsize - tokenBegin, bufpos);\n        bufline = newbufline;\n\n        System.arraycopy(bufcolumn, tokenBegin, newbufcolumn, 0, bufsize - tokenBegin);\n        System.arraycopy(bufcolumn, 0, newbufcolumn, bufsize - tokenBegin, bufpos);\n        bufcolumn = newbufcolumn;\n\n        maxNextCharInd = (bufpos += (bufsize - tokenBegin));\n      }\n      else\n      {\n        System.arraycopy(buffer, tokenBegin, newbuffer, 0, bufsize - tokenBegin);\n        buffer = newbuffer;\n\n        System.arraycopy(bufline, tokenBegin, newbufline, 0, bufsize - tokenBegin);\n        bufline = newbufline;\n\n        System.arraycopy(bufcolumn, tokenBegin, newbufcolumn, 0, bufsize - tokenBegin);\n        bufcolumn = newbufcolumn;\n\n        maxNextCharInd = (bufpos -= tokenBegin);\n      }\n    }\n    catch (Throwable t)\n    {\n      throw new Error(t.getMessage());\n    }\n\n\n    bufsize += 2048;\n    available = bufsize;\n    tokenBegin = 0;\n  }\n\n  static protected void FillBuff() throws java.io.IOException\n  {\n    if (maxNextCharInd == available)\n    {\n      if (available == bufsize)\n      {\n        if (tokenBegin > 2048)\n        {\n          bufpos = maxNextCharInd = 0;\n          available = tokenBegin;\n        }\n        else if (tokenBegin < 0)\n          bufpos = maxNextCharInd = 0;\n        else\n          ExpandBuff(false);\n      }\n      else if (available > tokenBegin)\n        available = bufsize;\n      else if ((tokenBegin - available) < 2048)\n        ExpandBuff(true);\n      else\n        available = tokenBegin;\n    }\n\n    int i;\n    try {\n      if ((i = inputStream.read(buffer, maxNextCharInd, available - maxNextCharInd)) == -1)\n      {\n        
inputStream.close();\n        throw new java.io.IOException();\n      }\n      else\n        maxNextCharInd += i;\n      return;\n    }\n    catch(java.io.IOException e) {\n      --bufpos;\n      backup(0);\n      if (tokenBegin == -1)\n        tokenBegin = bufpos;\n      throw e;\n    }\n  }\n\n/** Start. */\n  static public char BeginToken() throws java.io.IOException\n  {\n    tokenBegin = -1;\n    char c = readChar();\n    tokenBegin = bufpos;\n\n    return c;\n  }\n\n  static protected void UpdateLineColumn(char c)\n  {\n    column++;\n\n    if (prevCharIsLF)\n    {\n      prevCharIsLF = false;\n      line += (column = 1);\n    }\n    else if (prevCharIsCR)\n    {\n      prevCharIsCR = false;\n      if (c == '\\n')\n      {\n        prevCharIsLF = true;\n      }\n      else\n        line += (column = 1);\n    }\n\n    switch (c)\n    {\n      case '\\r' :\n        prevCharIsCR = true;\n        break;\n      case '\\n' :\n        prevCharIsLF = true;\n        break;\n      case '\\t' :\n        column--;\n        column += (tabSize - (column % tabSize));\n        break;\n      default :\n        break;\n    }\n\n    bufline[bufpos] = line;\n    bufcolumn[bufpos] = column;\n  }\n\n/** Read a character. */\n  static public char readChar() throws java.io.IOException\n  {\n    if (inBuf > 0)\n    {\n      --inBuf;\n\n      if (++bufpos == bufsize)\n        bufpos = 0;\n\n      return buffer[bufpos];\n    }\n\n    if (++bufpos >= maxNextCharInd)\n      FillBuff();\n\n    char c = buffer[bufpos];\n\n    UpdateLineColumn(c);\n    return c;\n  }\n\n  @Deprecated\n  /**\n   * @deprecated\n   * @see #getEndColumn\n   */\n\n  static public int getColumn() {\n    return bufcolumn[bufpos];\n  }\n\n  @Deprecated\n  /**\n   * @deprecated\n   * @see #getEndLine\n   */\n\n  static public int getLine() {\n    return bufline[bufpos];\n  }\n\n  /** Get token end column number. */\n  static public int getEndColumn() {\n    return bufcolumn[bufpos];\n  }\n\n  /** Get token end line number. */\n  static public int getEndLine() {\n     return bufline[bufpos];\n  }\n\n  /** Get token beginning column number. */\n  static public int getBeginColumn() {\n    return bufcolumn[tokenBegin];\n  }\n\n  /** Get token beginning line number. */\n  static public int getBeginLine() {\n    return bufline[tokenBegin];\n  }\n\n/** Backup a number of characters. */\n  static public void backup(int amount) {\n\n    inBuf += amount;\n    if ((bufpos -= amount) < 0)\n      bufpos += bufsize;\n  }\n\n  /** Constructor. */\n  public SimpleCharStream(java.io.Reader dstream, int startline,\n  int startcolumn, int buffersize)\n  {\n    if (inputStream != null)\n      throw new Error(\"\\n   ERROR: Second call to the constructor of a static SimpleCharStream.\\n\" +\n      \"       You must either use ReInit() or set the JavaCC option STATIC to false\\n\" +\n      \"       during the generation of this class.\");\n    inputStream = dstream;\n    line = startline;\n    column = startcolumn - 1;\n\n    available = bufsize = buffersize;\n    buffer = new char[buffersize];\n    bufline = new int[buffersize];\n    bufcolumn = new int[buffersize];\n  }\n\n  /** Constructor. */\n  public SimpleCharStream(java.io.Reader dstream, int startline,\n                          int startcolumn)\n  {\n    this(dstream, startline, startcolumn, 4096);\n  }\n\n  /** Constructor. */\n  public SimpleCharStream(java.io.Reader dstream)\n  {\n    this(dstream, 1, 1, 4096);\n  }\n\n  /** Reinitialise. 
*/\n  public void ReInit(java.io.Reader dstream, int startline,\n  int startcolumn, int buffersize)\n  {\n    inputStream = dstream;\n    line = startline;\n    column = startcolumn - 1;\n\n    if (buffer == null || buffersize != buffer.length)\n    {\n      available = bufsize = buffersize;\n      buffer = new char[buffersize];\n      bufline = new int[buffersize];\n      bufcolumn = new int[buffersize];\n    }\n    prevCharIsLF = prevCharIsCR = false;\n    tokenBegin = inBuf = maxNextCharInd = 0;\n    bufpos = -1;\n  }\n\n  /** Reinitialise. */\n  public void ReInit(java.io.Reader dstream, int startline,\n                     int startcolumn)\n  {\n    ReInit(dstream, startline, startcolumn, 4096);\n  }\n\n  /** Reinitialise. */\n  public void ReInit(java.io.Reader dstream)\n  {\n    ReInit(dstream, 1, 1, 4096);\n  }\n  /** Constructor. */\n  public SimpleCharStream(java.io.InputStream dstream, String encoding, int startline,\n  int startcolumn, int buffersize) throws java.io.UnsupportedEncodingException\n  {\n    this(encoding == null ? new java.io.InputStreamReader(dstream) : new java.io.InputStreamReader(dstream, encoding), startline, startcolumn, buffersize);\n  }\n\n  /** Constructor. */\n  public SimpleCharStream(java.io.InputStream dstream, int startline,\n  int startcolumn, int buffersize)\n  {\n    this(new java.io.InputStreamReader(dstream), startline, startcolumn, buffersize);\n  }\n\n  /** Constructor. */\n  public SimpleCharStream(java.io.InputStream dstream, String encoding, int startline,\n                          int startcolumn) throws java.io.UnsupportedEncodingException\n  {\n    this(dstream, encoding, startline, startcolumn, 4096);\n  }\n\n  /** Constructor. */\n  public SimpleCharStream(java.io.InputStream dstream, int startline,\n                          int startcolumn)\n  {\n    this(dstream, startline, startcolumn, 4096);\n  }\n\n  /** Constructor. */\n  public SimpleCharStream(java.io.InputStream dstream, String encoding) throws java.io.UnsupportedEncodingException\n  {\n    this(dstream, encoding, 1, 1, 4096);\n  }\n\n  /** Constructor. */\n  public SimpleCharStream(java.io.InputStream dstream)\n  {\n    this(dstream, 1, 1, 4096);\n  }\n\n  /** Reinitialise. */\n  public void ReInit(java.io.InputStream dstream, String encoding, int startline,\n                          int startcolumn, int buffersize) throws java.io.UnsupportedEncodingException\n  {\n    ReInit(encoding == null ? new java.io.InputStreamReader(dstream) : new java.io.InputStreamReader(dstream, encoding), startline, startcolumn, buffersize);\n  }\n\n  /** Reinitialise. */\n  public void ReInit(java.io.InputStream dstream, int startline,\n                          int startcolumn, int buffersize)\n  {\n    ReInit(new java.io.InputStreamReader(dstream), startline, startcolumn, buffersize);\n  }\n\n  /** Reinitialise. */\n  public void ReInit(java.io.InputStream dstream, String encoding) throws java.io.UnsupportedEncodingException\n  {\n    ReInit(dstream, encoding, 1, 1, 4096);\n  }\n\n  /** Reinitialise. */\n  public void ReInit(java.io.InputStream dstream)\n  {\n    ReInit(dstream, 1, 1, 4096);\n  }\n  /** Reinitialise. */\n  public void ReInit(java.io.InputStream dstream, String encoding, int startline,\n                     int startcolumn) throws java.io.UnsupportedEncodingException\n  {\n    ReInit(dstream, encoding, startline, startcolumn, 4096);\n  }\n  /** Reinitialise. 
*/\n  public void ReInit(java.io.InputStream dstream, int startline,\n                     int startcolumn)\n  {\n    ReInit(dstream, startline, startcolumn, 4096);\n  }\n  /** Get token literal value. */\n  static public String GetImage()\n  {\n    if (bufpos >= tokenBegin)\n      return new String(buffer, tokenBegin, bufpos - tokenBegin + 1);\n    else\n      return new String(buffer, tokenBegin, bufsize - tokenBegin) +\n                            new String(buffer, 0, bufpos + 1);\n  }\n\n  /** Get the suffix. */\n  static public char[] GetSuffix(int len)\n  {\n    char[] ret = new char[len];\n\n    if ((bufpos + 1) >= len)\n      System.arraycopy(buffer, bufpos - len + 1, ret, 0, len);\n    else\n    {\n      System.arraycopy(buffer, bufsize - (len - bufpos - 1), ret, 0,\n                                                        len - bufpos - 1);\n      System.arraycopy(buffer, 0, ret, len - bufpos - 1, bufpos + 1);\n    }\n\n    return ret;\n  }\n\n  /** Reset buffer when finished. */\n  static public void Done()\n  {\n    buffer = null;\n    bufline = null;\n    bufcolumn = null;\n  }\n\n  /**\n   * Method to adjust line and column numbers for the start of a token.\n   */\n  static public void adjustBeginLineColumn(int newLine, int newCol)\n  {\n    int start = tokenBegin;\n    int len;\n\n    if (bufpos >= tokenBegin)\n    {\n      len = bufpos - tokenBegin + inBuf + 1;\n    }\n    else\n    {\n      len = bufsize - tokenBegin + bufpos + 1 + inBuf;\n    }\n\n    int i = 0, j = 0, k = 0;\n    int nextColDiff = 0, columnDiff = 0;\n\n    while (i < len && bufline[j = start % bufsize] == bufline[k = ++start % bufsize])\n    {\n      bufline[j] = newLine;\n      nextColDiff = columnDiff + bufcolumn[k] - bufcolumn[j];\n      bufcolumn[j] = newCol + columnDiff;\n      columnDiff = nextColDiff;\n      i++;\n    }\n\n    if (i < len)\n    {\n      bufline[j] = newLine++;\n      bufcolumn[j] = newCol + columnDiff;\n\n      while (i++ < len)\n      {\n        if (bufline[j = start % bufsize] != bufline[++start % bufsize])\n          bufline[j] = newLine++;\n        else\n          bufline[j] = newLine;\n      }\n    }\n\n    line = bufline[j];\n    column = bufcolumn[j];\n  }\n  static boolean getTrackLineColumn() { return trackLineColumn; }\n  static void setTrackLineColumn(boolean tlc) { trackLineColumn = tlc; }\n}\n/* JavaCC - OriginalChecksum=052d2c8783a7a693ccde91d90feb1d3b (do not edit this line) */\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/generatedcode/Token.java",
    "content": "package flink.examples.sql._06.calcite.javacc.generatedcode;/* Generated By:JavaCC: Do not edit this line. Token.java Version 7.0 */\n/* JavaCCOptions:TOKEN_EXTENDS=,KEEP_LINE_COLUMN=true,SUPPORT_CLASS_VISIBILITY_PUBLIC=true */\n/**\n * Describes the input token stream.\n */\n\npublic class Token implements java.io.Serializable {\n\n  /**\n   * The version identifier for this Serializable class.\n   * Increment only if the <i>serialized</i> form of the\n   * class changes.\n   */\n  private static final long serialVersionUID = 1L;\n\n  /**\n   * An integer that describes the kind of this token.  This numbering\n   * system is determined by JavaCCParser, and a table of these numbers is\n   * stored in the file ...Constants.java.\n   */\n  public int kind;\n\n  /** The line number of the first character of this Token. */\n  public int beginLine;\n  /** The column number of the first character of this Token. */\n  public int beginColumn;\n  /** The line number of the last character of this Token. */\n  public int endLine;\n  /** The column number of the last character of this Token. */\n  public int endColumn;\n\n  /**\n   * The string image of the token.\n   */\n  public String image;\n\n  /**\n   * A reference to the next regular (non-special) token from the input\n   * stream.  If this is the last token from the input stream, or if the\n   * token manager has not read tokens beyond this one, this field is\n   * set to null.  This is true only if this token is also a regular\n   * token.  Otherwise, see below for a description of the contents of\n   * this field.\n   */\n  public Token next;\n\n  /**\n   * This field is used to access special tokens that occur prior to this\n   * token, but after the immediately preceding regular (non-special) token.\n   * If there are no such special tokens, this field is set to null.\n   * When there are more than one such special token, this field refers\n   * to the last of these special tokens, which in turn refers to the next\n   * previous special token through its specialToken field, and so on\n   * until the first special token (whose specialToken field is null).\n   * The next fields of special tokens refer to other special tokens that\n   * immediately follow it (without an intervening regular token).  If there\n   * is no such token, this field is null.\n   */\n  public Token specialToken;\n\n  /**\n   * An optional attribute value of the Token.\n   * Tokens which are not used as syntactic sugar will often contain\n   * meaningful values that will be used later on by the compiler or\n   * interpreter. This attribute value is often different from the image.\n   * Any subclass of Token that actually wants to return a non-null value can\n   * override this method as appropriate.\n   */\n  public Object getValue() {\n    return null;\n  }\n\n  /**\n   * No-argument constructor\n   */\n  public Token() {}\n\n  /**\n   * Constructs a new token for the specified Image.\n   */\n  public Token(int kind)\n  {\n    this(kind, null);\n  }\n\n  /**\n   * Constructs a new token for the specified Image and Kind.\n   */\n  public Token(int kind, String image)\n  {\n    this.kind = kind;\n    this.image = image;\n  }\n\n  /**\n   * Returns the image.\n   */\n  @Override\n  public String toString()\n  {\n    return image;\n  }\n\n  /**\n   * Returns a new Token object, by default. 
However, if you want, you\n   * can create and return subclass objects based on the value of ofKind.\n   * Simply add the cases to the switch for all those special cases.\n   * For example, if you have a subclass of Token called IDToken that\n   * you want to create if ofKind is ID, simply add something like :\n   *\n   *    case MyParserConstants.ID : return new IDToken(ofKind, image);\n   *\n   * to the following switch statement. Then you can cast the matchedToken\n   * variable to the appropriate type and use it in your lexical actions.\n   */\n  public static Token newToken(int ofKind, String image)\n  {\n    switch(ofKind)\n    {\n      default : return new Token(ofKind, image);\n    }\n  }\n\n  public static Token newToken(int ofKind)\n  {\n    return newToken(ofKind, null);\n  }\n\n}\n/* JavaCC - OriginalChecksum=093f73b266edc0ed6a424fcd3b5446d1 (do not edit this line) */\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_06/calcite/javacc/generatedcode/TokenMgrError.java",
    "content": "package flink.examples.sql._06.calcite.javacc.generatedcode;/* Generated By:JavaCC: Do not edit this line. TokenMgrError.java Version 7.0 */\n/* JavaCCOptions: */\n/** Token Manager Error. */\npublic class TokenMgrError extends Error\n{\n\n  /**\n   * The version identifier for this Serializable class.\n   * Increment only if the <i>serialized</i> form of the\n   * class changes.\n   */\n  private static final long serialVersionUID = 1L;\n\n  /*\n   * Ordinals for various reasons why an Error of this type can be thrown.\n   */\n\n  /**\n   * Lexical error occurred.\n   */\n  public static final int LEXICAL_ERROR = 0;\n\n  /**\n   * An attempt was made to create a second instance of a static token manager.\n   */\n  public static final int STATIC_LEXER_ERROR = 1;\n\n  /**\n   * Tried to change to an invalid lexical state.\n   */\n  public static final int INVALID_LEXICAL_STATE = 2;\n\n  /**\n   * Detected (and bailed out of) an infinite loop in the token manager.\n   */\n  public static final int LOOP_DETECTED = 3;\n\n  /**\n   * Indicates the reason why the exception is thrown. It will have\n   * one of the above 4 values.\n   */\n  int errorCode;\n\n  /**\n   * Replaces unprintable characters by their escaped (or unicode escaped)\n   * equivalents in the given string\n   */\n  protected static final String addEscapes(String str) {\n    StringBuilder retval = new StringBuilder();\n    char ch;\n    for (int i = 0; i < str.length(); i++) {\n      switch (str.charAt(i))\n      {\n        case '\\b':\n          retval.append(\"\\\\b\");\n          continue;\n        case '\\t':\n          retval.append(\"\\\\t\");\n          continue;\n        case '\\n':\n          retval.append(\"\\\\n\");\n          continue;\n        case '\\f':\n          retval.append(\"\\\\f\");\n          continue;\n        case '\\r':\n          retval.append(\"\\\\r\");\n          continue;\n        case '\\\"':\n          retval.append(\"\\\\\\\"\");\n          continue;\n        case '\\'':\n          retval.append(\"\\\\\\'\");\n          continue;\n        case '\\\\':\n          retval.append(\"\\\\\\\\\");\n          continue;\n        default:\n          if ((ch = str.charAt(i)) < 0x20 || ch > 0x7e) {\n            String s = \"0000\" + Integer.toString(ch, 16);\n            retval.append(\"\\\\u\" + s.substring(s.length() - 4, s.length()));\n          } else {\n            retval.append(ch);\n          }\n          continue;\n      }\n    }\n    return retval.toString();\n  }\n\n  /**\n   * Returns a detailed message for the Error when it is thrown by the\n   * token manager to indicate a lexical error.\n   * Parameters :\n   *    EOFSeen     : indicates if EOF caused the lexical error\n   *    curLexState : lexical state in which this error occurred\n   *    errorLine   : line number when the error occurred\n   *    errorColumn : column number when the error occurred\n   *    errorAfter  : prefix that was seen before this error occurred\n   *    curchar     : the offending character\n   * Note: You can customize the lexical error message by modifying this method.\n   */\n  protected static String LexicalErr(boolean EOFSeen, int lexState, int errorLine, int errorColumn, String errorAfter, int curChar) {\n    char curChar1 = (char)curChar;\n    return(\"Lexical error at line \" +\n          errorLine + \", column \" +\n          errorColumn + \".  Encountered: \" +\n          (EOFSeen ? 
\"<EOF> \" : (\"\\\"\" + addEscapes(String.valueOf(curChar1)) + \"\\\"\") + \" (\" + curChar + \"), \") +\n          \"after : \\\"\" + addEscapes(errorAfter) + \"\\\"\");\n  }\n\n  /**\n   * You can also modify the body of this method to customize your error messages.\n   * For example, cases like LOOP_DETECTED and INVALID_LEXICAL_STATE are not\n   * of end-users concern, so you can return something like :\n   *\n   *     \"Internal Error : Please file a bug report .... \"\n   *\n   * from this method for such cases in the release version of your parser.\n   */\n  @Override\n  public String getMessage() {\n    return super.getMessage();\n  }\n\n  /*\n   * Constructors of various flavors follow.\n   */\n\n  /** No arg constructor. */\n  public TokenMgrError() {\n  }\n\n  /** Constructor with message and reason. */\n  public TokenMgrError(String message, int reason) {\n    super(message);\n    errorCode = reason;\n  }\n\n  /** Full Constructor. */\n  public TokenMgrError(boolean EOFSeen, int lexState, int errorLine, int errorColumn, String errorAfter, int curChar, int reason) {\n    this(LexicalErr(EOFSeen, lexState, errorLine, errorColumn, errorAfter, curChar), reason);\n  }\n}\n/* JavaCC - OriginalChecksum=9e201c978d59ab6f122a52837e6310b1 (do not edit this line) */\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_01_select_where/SelectWhereHiveDialect.java",
    "content": "package flink.examples.sql._07.query._01_select_where;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n/**\n * Start Hadoop: /usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * Start the Hive metastore: $HIVE_HOME/bin/hive --service metastore &\n * Hive CLI: $HIVE_HOME/bin/hive\n */\npublic class SelectWhereHiveDialect {\n\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        // TODO without a hive catalog, this falls back to the default parser\n        flinkEnv.streamTEnv().getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        DataStream<Row> r = flinkEnv.env().addSource(new UserDefinedSource());\n\n        flinkEnv.streamTEnv().createTemporaryView(\"source_table\", r);\n\n        String selectWhereSql = \"select * from source_table\";\n\n        Table resultTable = flinkEnv.streamTEnv().sqlQuery(selectWhereSql);\n\n        flinkEnv.streamTEnv().toRetractStream(resultTable, Row.class).print();\n\n        flinkEnv.env().execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(new TypeInformation[]{\n                    TypeInformation.of(String.class)\n                    , TypeInformation.of(String.class)\n                    , TypeInformation.of(Long.class)\n            }, new String[] {\"a\", \"b\", \"c\"});\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_01_select_where/SelectWhereTest.java",
    "content": "package flink.examples.sql._07.query._01_select_where;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class SelectWhereTest {\n\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n//        DataStream<Row> r = env.addSource(new FlinkKafkaConsumer<Row>());\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        Table sourceTable = tEnv.fromDataStream(r\n                , Schema\n                        .newBuilder()\n                        .column(\"f0\", \"string\")\n                        .column(\"f1\", \"string\")\n                        .column(\"f2\", \"bigint\")\n                        .columnByExpression(\"proctime\", \"PROCTIME()\")\n                        .build());\n\n        tEnv.createTemporaryView(\"source_table\", sourceTable);\n\n        String selectWhereSql = \"select f0 from source_table where f1 = 'b'\";\n\n        Table resultTable = tEnv.sqlQuery(selectWhereSql);\n\n        tEnv.toRetractStream(resultTable, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                
sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_01_select_where/SelectWhereTest2.java",
    "content": "package flink.examples.sql._07.query._01_select_where;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class SelectWhereTest2 {\n\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_number BIGINT,\\n\"\n                + \"    price        DECIMAL(32,2)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.order_number.min' = '10',\\n\"\n                + \"  'fields.order_number.max' = '11'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    order_number BIGINT,\\n\"\n                + \"    price        DECIMAL(32,2)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select * from source_table\\n\"\n                + \"where order_number = 10\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"ETL 案例\");\n\n        flinkEnv.streamTEnv().executeSql(sourceSql);\n        flinkEnv.streamTEnv().executeSql(sinkSql);\n        flinkEnv.streamTEnv().executeSql(selectWhereSql);\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_01_select_where/SelectWhereTest3.java",
    "content": "package flink.examples.sql._07.query._01_select_where;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class SelectWhereTest3 {\n\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode()\n                .build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        Table sourceTable = tEnv.fromDataStream(r\n                , Schema\n                        .newBuilder()\n                        .column(\"f0\", \"string\")\n                        .column(\"f1\", \"string\")\n                        .column(\"f2\", \"bigint\")\n                        .columnByExpression(\"proctime\", \"PROCTIME()\")\n                        .build());\n\n        tEnv.createTemporaryView(\"source_table\", sourceTable);\n\n        String selectWhereSql = \"select f0 from source_table where f1 = 'b'\";\n\n        Table resultTable = tEnv.sqlQuery(selectWhereSql);\n\n        tEnv.toRetractStream(resultTable, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                
Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_01_select_where/SelectWhereTest4.java",
    "content": "package flink.examples.sql._07.query._01_select_where;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class SelectWhereTest4 {\n\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // checkpoint settings\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        // TODO test selecting only some of the fields\n\n        tEnv.createTemporaryView(\"source_table\", r);\n\n        String selectWhereSql = \"select a from source_table\";\n\n        Table resultTable = tEnv.sqlQuery(selectWhereSql);\n\n        tEnv.toRetractStream(resultTable, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(new TypeInformation[]{\n                    TypeInformation.of(String.class)\n                    , TypeInformation.of(String.class)\n                    , TypeInformation.of(Long.class)\n            }, new String[] {\"a\", \"b\", \"c\"});\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_01_select_where/SelectWhereTest5.java",
    "content": "package flink.examples.sql._07.query._01_select_where;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class SelectWhereTest5 {\n\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE Orders (\\n\"\n                + \"    order_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE target_table (\\n\"\n                + \"    order_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time timestamp(3)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO target_table\\n\"\n                + \"SELECT * FROM Orders\\n\"\n                + \"Where order_id > 3\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_01_select_where/StreamExecCalc$10.java",
    "content": "package flink.examples.sql._07.query._01_select_where;\n\npublic class StreamExecCalc$10 extends org.apache.flink.table.runtime.operators.TableStreamOperator\n        implements org.apache.flink.streaming.api.operators.OneInputStreamOperator {\n\n    private final Object[] references;\n    org.apache.flink.table.data.BoxedWrapperRowData out = new org.apache.flink.table.data.BoxedWrapperRowData(2);\n    private final org.apache.flink.streaming.runtime.streamrecord.StreamRecord outElement =\n            new org.apache.flink.streaming.runtime.streamrecord.StreamRecord(null);\n\n    public StreamExecCalc$10(\n            Object[] references,\n            org.apache.flink.streaming.runtime.tasks.StreamTask task,\n            org.apache.flink.streaming.api.graph.StreamConfig config,\n            org.apache.flink.streaming.api.operators.Output output,\n            org.apache.flink.streaming.runtime.tasks.ProcessingTimeService processingTimeService) throws Exception {\n        this.references = references;\n\n        this.setup(task, config, output);\n        if (this instanceof org.apache.flink.streaming.api.operators.AbstractStreamOperator) {\n            ((org.apache.flink.streaming.api.operators.AbstractStreamOperator) this)\n                    .setProcessingTimeService(processingTimeService);\n        }\n    }\n\n    @Override\n    public void open() throws Exception {\n        super.open();\n\n    }\n\n    @Override\n    public void processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord element) throws Exception {\n        org.apache.flink.table.data.RowData in1 = (org.apache.flink.table.data.RowData) element.getValue();\n\n        long field$1;\n        boolean isNull$1;\n        boolean isNull$2;\n        boolean result$3;\n        org.apache.flink.table.data.DecimalData field$4;\n        boolean isNull$4;\n\n\n        isNull$1 = in1.isNullAt(0);\n        field$1 = -1L;\n        if (!isNull$1) {\n            field$1 = in1.getLong(0);\n        }\n\n\n        isNull$2 = isNull$1 || false;\n        result$3 = false;\n        if (!isNull$2) {\n\n            result$3 = field$1 == ((long) 10L);\n\n        }\n\n        if (result$3) {\n            isNull$4 = in1.isNullAt(1);\n            field$4 = null;\n            if (!isNull$4) {\n                field$4 = in1.getDecimal(1, 32, 2);\n            }\n\n            out.setRowKind(in1.getRowKind());\n\n\n            if (false) {\n                out.setNullAt(0);\n            } else {\n                out.setLong(0, ((long) 10L));\n            }\n\n\n            if (isNull$4) {\n                out.setNullAt(1);\n            } else {\n                out.setNonPrimitiveValue(1, field$4);\n            }\n\n\n            output.collect(outElement.replace(out));\n\n        }\n\n    }\n\n\n    @Override\n    public void close() throws Exception {\n        super.close();\n\n    }\n\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_02_select_distinct/GroupAggsHandler$5.java",
    "content": "package flink.examples.sql._07.query._02_select_distinct;\n\n\npublic final class GroupAggsHandler$5 implements org.apache.flink.table.runtime.generated.AggsHandleFunction {\n\n    org.apache.flink.table.data.GenericRowData acc$2 = new org.apache.flink.table.data.GenericRowData(0);\n    org.apache.flink.table.data.GenericRowData acc$3 = new org.apache.flink.table.data.GenericRowData(0);\n    org.apache.flink.table.data.GenericRowData aggValue$4 = new org.apache.flink.table.data.GenericRowData(0);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    public GroupAggsHandler$5(Object[] references) throws Exception {\n\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n    }\n\n    @Override\n    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        throw new RuntimeException(\n                \"This function not require retract method, but the retract method is called.\");\n\n    }\n\n    @Override\n    public void merge(org.apache.flink.table.data.RowData otherAcc) throws Exception {\n\n        throw new RuntimeException(\"This function not require merge method, but the merge method is called.\");\n\n    }\n\n    @Override\n    public void setAccumulators(org.apache.flink.table.data.RowData acc) throws Exception {\n\n\n    }\n\n    @Override\n    public void resetAccumulators() throws Exception {\n\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n        acc$3 = new org.apache.flink.table.data.GenericRowData(0);\n        return acc$3;\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n        acc$2 = new org.apache.flink.table.data.GenericRowData(0);\n        return acc$2;\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue() throws Exception {\n        aggValue$4 = new org.apache.flink.table.data.GenericRowData(0);\n        return aggValue$4;\n    }\n\n    @Override\n    public void cleanup() throws Exception {\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_02_select_distinct/KeyProjection$0.java",
    "content": "package flink.examples.sql._07.query._02_select_distinct;\n\n\npublic class KeyProjection$0 implements\n        org.apache.flink.table.runtime.generated.Projection<org.apache.flink.table.data.RowData,\n                org.apache.flink.table.data.binary.BinaryRowData> {\n\n    org.apache.flink.table.data.binary.BinaryRowData out = new org.apache.flink.table.data.binary.BinaryRowData(1);\n    org.apache.flink.table.data.writer.BinaryRowWriter outWriter =\n            new org.apache.flink.table.data.writer.BinaryRowWriter(out);\n\n    public KeyProjection$0(Object[] references) throws Exception {\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.binary.BinaryRowData apply(org.apache.flink.table.data.RowData in1) {\n        org.apache.flink.table.data.binary.BinaryStringData field$1;\n        boolean isNull$1;\n\n\n        outWriter.reset();\n\n        isNull$1 = in1.isNullAt(0);\n        field$1 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n        if (!isNull$1) {\n            field$1 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(0));\n        }\n        if (isNull$1) {\n            outWriter.setNullAt(0);\n        } else {\n            outWriter.writeString(0, field$1);\n        }\n\n        outWriter.complete();\n\n\n        return out;\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_02_select_distinct/SelectDistinctTest.java",
    "content": "package flink.examples.sql._07.query._02_select_distinct;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n\npublic class SelectDistinctTest {\n\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        Table sourceTable = tEnv.fromDataStream(r\n                , Schema\n                        .newBuilder()\n                        .column(\"f0\", \"string\")\n                        .column(\"f1\", \"string\")\n                        .column(\"f2\", \"bigint\")\n                        .columnByExpression(\"proctime\", \"PROCTIME()\")\n                        .build());\n\n        tEnv.createTemporaryView(\"source_table\", sourceTable);\n\n        String selectDistinctSql = \"select distinct f0 from source_table\";\n\n        Table resultTable = tEnv.sqlQuery(selectDistinctSql);\n\n        tEnv.toRetractStream(resultTable, Row.class).print();\n\n\n\n        String groupBySql = \"select f0 from source_table group by f0\";\n\n        Table resultTable1 = tEnv.sqlQuery(groupBySql);\n\n        tEnv.toRetractStream(resultTable1, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) 
throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_02_select_distinct/SelectDistinctTest2.java",
    "content": "package flink.examples.sql._07.query._02_select_distinct;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class SelectDistinctTest2 {\n\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sourceSql = \"CREATE TABLE Orders (\\n\"\n                + \"    id STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.id.length' = '1'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE target_table (\\n\"\n                + \"    id STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"INSERT into target_table\\n\"\n                + \"SELECT \\n\"\n                + \"    DISTINCT id \\n\"\n                + \"FROM Orders\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"SELECT DISTINCT 案例\");\n\n        tEnv.executeSql(sourceSql);\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_01_group_agg/GroupAggMiniBatchTest.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._01_group_agg;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n// https://www.jianshu.com/p/aa2e94628e24\n\npublic class GroupAggMiniBatchTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // checkpoint settings\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        Configuration configuration = tEnv.getConfig().getConfiguration();\n        // set low-level key-value options\n\n        configuration.setString(\"table.exec.mini-batch.enabled\", \"true\"); // enable mini-batch optimization\n        configuration.setString(\"table.exec.mini-batch.allow-latency\", \"5 s\"); // use 5 seconds to buffer input records\n        configuration.setString(\"table.exec.mini-batch.size\", \"5000\"); // the maximum number of records that can be buffered by each aggregate operator task\n\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.order_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    count_result BIGINT,\\n\"\n                + \"    sum_result BIGINT,\\n\"\n                + \"    avg_result DOUBLE,\\n\"\n                + \"    min_result BIGINT,\\n\"\n                + \"    max_result BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select order_id,\\n\"\n                + \"       count(*) as count_result,\\n\"\n                + \"       sum(price) as sum_result,\\n\"\n                + \"       avg(price) as avg_result,\\n\"\n                + \"       min(price) as min_result,\\n\"\n                + \"       max(price) as max_result\\n\"\n                + \"from source_table\\n\"\n                + \"group by order_id\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUP AGG MINI BATCH example\");\n\n        tEnv.executeSql(sourceSql);\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_01_group_agg/GroupAggTest.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._01_group_agg;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class GroupAggTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.order_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \")\";\n\n//        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n//                + \"    order_id STRING,\\n\"\n//                + \"    count_result BIGINT,\\n\"\n//                + \"    sum_result BIGINT,\\n\"\n//                + \"    avg_result DOUBLE,\\n\"\n//                + \"    min_result BIGINT,\\n\"\n//                + \"    max_result BIGINT\\n\"\n//                + \") WITH (\\n\"\n//                + \"  'connector' = 'print'\\n\"\n//                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    count_result BIGINT\\n\"\n//                + \"    sum_result BIGINT,\\n\"\n//                + \"    avg_result DOUBLE,\\n\"\n//                + \"    min_result BIGINT,\\n\"\n//                + \"    max_result BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n//        String selectWhereSql = \"insert into sink_table\\n\"\n//                + \"select order_id,\\n\"\n//                + \"       count(*) as count_result,\\n\"\n//                + \"       sum(price) as sum_result,\\n\"\n//     
           + \"       avg(price) as avg_result,\\n\"\n//                + \"       min(price) as min_result,\\n\"\n//                + \"       max(price) as max_result\\n\"\n//                + \"from source_table\\n\"\n//                + \"group by order_id\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select count(1) as count_result\\n\"\n                + \"from (\\n\"\n                + \"  select order_id,\\n\"\n                + \"         count(*) as count_result\\n\"\n                + \"  from source_table\\n\"\n                + \"  group by order_id\\n\"\n                + \")\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUP AGG 案例\");\n\n        tEnv.executeSql(sourceSql);\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_01_group_agg/GroupAggsHandler$39.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._01_group_agg;\n\n\npublic final class GroupAggsHandler$39 implements org.apache.flink.table.runtime.generated.AggsHandleFunction {\n\n    long agg0_count1;\n    boolean agg0_count1IsNull;\n    long agg1_sum;\n    boolean agg1_sumIsNull;\n    long agg2_sum;\n    boolean agg2_sumIsNull;\n    long agg2_count;\n    boolean agg2_countIsNull;\n    long agg3_min;\n    boolean agg3_minIsNull;\n    long agg4_max;\n    boolean agg4_maxIsNull;\n    org.apache.flink.table.data.GenericRowData acc$2 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData acc$3 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData aggValue$38 = new org.apache.flink.table.data.GenericRowData(5);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    public GroupAggsHandler$39(Object[] references) throws Exception {\n\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n    }\n\n    @Override\n    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n        boolean isNull$10;\n        long result$11;\n        long field$12;\n        boolean isNull$12;\n        boolean isNull$13;\n        long result$14;\n        boolean isNull$17;\n        long result$18;\n        boolean isNull$20;\n        long result$21;\n        boolean isNull$23;\n        boolean result$24;\n        boolean isNull$28;\n        boolean result$29;\n        isNull$12 = accInput.isNullAt(1);\n        field$12 = -1L;\n        if (!isNull$12) {\n            field$12 = accInput.getLong(1);\n        }\n\n\n        isNull$10 = agg0_count1IsNull || false;\n        result$11 = -1L;\n        if (!isNull$10) {\n            result$11 = (long) (agg0_count1 + ((long) 1L));\n        }\n        agg0_count1 = result$11;\n        agg0_count1IsNull = isNull$10;\n\n        long result$16 = -1L;\n        boolean isNull$16;\n        if (isNull$12) {\n            isNull$16 = agg1_sumIsNull;\n            if (!isNull$16) {\n                result$16 = agg1_sum;\n            }\n        } else {\n            long result$15 = -1L;\n            boolean isNull$15;\n            if (agg1_sumIsNull) {\n                isNull$15 = isNull$12;\n                if (!isNull$15) {\n                    result$15 = field$12;\n                }\n            } else {\n                isNull$13 = agg1_sumIsNull || isNull$12;\n                result$14 = -1L;\n                if (!isNull$13) {\n                    result$14 = (long) (agg1_sum + field$12);\n                }\n\n                isNull$15 = isNull$13;\n                if (!isNull$15) {\n                    result$15 = result$14;\n                }\n            }\n            isNull$16 = isNull$15;\n            if (!isNull$16) {\n                result$16 = result$15;\n            }\n        }\n        agg1_sum = result$16;\n        ;\n        agg1_sumIsNull = isNull$16;\n\n\n        long result$19 = -1L;\n        boolean isNull$19;\n        if (isNull$12) {\n\n            isNull$19 = agg2_sumIsNull;\n            if (!isNull$19) {\n                result$19 = agg2_sum;\n            }\n        } else {\n\n\n            isNull$17 = agg2_sumIsNull || isNull$12;\n    
        result$18 = -1L;\n            if (!isNull$17) {\n\n                result$18 = (long) (agg2_sum + field$12);\n\n            }\n\n            isNull$19 = isNull$17;\n            if (!isNull$19) {\n                result$19 = result$18;\n            }\n        }\n        agg2_sum = result$19;\n        ;\n        agg2_sumIsNull = isNull$19;\n\n\n        long result$22 = -1L;\n        boolean isNull$22;\n        if (isNull$12) {\n\n            isNull$22 = agg2_countIsNull;\n            if (!isNull$22) {\n                result$22 = agg2_count;\n            }\n        } else {\n\n\n            isNull$20 = agg2_countIsNull || false;\n            result$21 = -1L;\n            if (!isNull$20) {\n\n                result$21 = (long) (agg2_count + ((long) 1L));\n\n            }\n\n            isNull$22 = isNull$20;\n            if (!isNull$22) {\n                result$22 = result$21;\n            }\n        }\n        agg2_count = result$22;\n        ;\n        agg2_countIsNull = isNull$22;\n\n\n        long result$27 = -1L;\n        boolean isNull$27;\n        if (isNull$12) {\n\n            isNull$27 = agg3_minIsNull;\n            if (!isNull$27) {\n                result$27 = agg3_min;\n            }\n        } else {\n            long result$26 = -1L;\n            boolean isNull$26;\n            if (agg3_minIsNull) {\n\n                isNull$26 = isNull$12;\n                if (!isNull$26) {\n                    result$26 = field$12;\n                }\n            } else {\n                isNull$23 = isNull$12 || agg3_minIsNull;\n                result$24 = false;\n                if (!isNull$23) {\n\n                    result$24 = field$12 < agg3_min;\n\n                }\n\n                long result$25 = -1L;\n                boolean isNull$25;\n                if (result$24) {\n\n                    isNull$25 = isNull$12;\n                    if (!isNull$25) {\n                        result$25 = field$12;\n                    }\n                } else {\n\n                    isNull$25 = agg3_minIsNull;\n                    if (!isNull$25) {\n                        result$25 = agg3_min;\n                    }\n                }\n                isNull$26 = isNull$25;\n                if (!isNull$26) {\n                    result$26 = result$25;\n                }\n            }\n            isNull$27 = isNull$26;\n            if (!isNull$27) {\n                result$27 = result$26;\n            }\n        }\n        agg3_min = result$27;\n        ;\n        agg3_minIsNull = isNull$27;\n\n\n        long result$32 = -1L;\n        boolean isNull$32;\n        if (isNull$12) {\n\n            isNull$32 = agg4_maxIsNull;\n            if (!isNull$32) {\n                result$32 = agg4_max;\n            }\n        } else {\n            long result$31 = -1L;\n            boolean isNull$31;\n            if (agg4_maxIsNull) {\n\n                isNull$31 = isNull$12;\n                if (!isNull$31) {\n                    result$31 = field$12;\n                }\n            } else {\n                isNull$28 = isNull$12 || agg4_maxIsNull;\n                result$29 = false;\n                if (!isNull$28) {\n\n                    result$29 = field$12 > agg4_max;\n\n                }\n\n                long result$30 = -1L;\n                boolean isNull$30;\n                if (result$29) {\n\n                    isNull$30 = isNull$12;\n                    if (!isNull$30) {\n                        result$30 = field$12;\n                    }\n                } else {\n\n           
         isNull$30 = agg4_maxIsNull;\n                    if (!isNull$30) {\n                        result$30 = agg4_max;\n                    }\n                }\n                isNull$31 = isNull$30;\n                if (!isNull$31) {\n                    result$31 = result$30;\n                }\n            }\n            isNull$32 = isNull$31;\n            if (!isNull$32) {\n                result$32 = result$31;\n            }\n        }\n        agg4_max = result$32;\n        ;\n        agg4_maxIsNull = isNull$32;\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        throw new RuntimeException(\n                \"This function not require retract method, but the retract method is called.\");\n\n    }\n\n    @Override\n    public void merge(org.apache.flink.table.data.RowData otherAcc) throws Exception {\n\n        throw new RuntimeException(\"This function not require merge method, but the merge method is called.\");\n\n    }\n\n    @Override\n    public void setAccumulators(org.apache.flink.table.data.RowData acc) throws Exception {\n\n        long field$4;\n        boolean isNull$4;\n        long field$5;\n        boolean isNull$5;\n        long field$6;\n        boolean isNull$6;\n        long field$7;\n        boolean isNull$7;\n        long field$8;\n        boolean isNull$8;\n        long field$9;\n        boolean isNull$9;\n        isNull$8 = acc.isNullAt(4);\n        field$8 = -1L;\n        if (!isNull$8) {\n            field$8 = acc.getLong(4);\n        }\n        isNull$4 = acc.isNullAt(0);\n        field$4 = -1L;\n        if (!isNull$4) {\n            field$4 = acc.getLong(0);\n        }\n        isNull$5 = acc.isNullAt(1);\n        field$5 = -1L;\n        if (!isNull$5) {\n            field$5 = acc.getLong(1);\n        }\n        isNull$7 = acc.isNullAt(3);\n        field$7 = -1L;\n        if (!isNull$7) {\n            field$7 = acc.getLong(3);\n        }\n        isNull$9 = acc.isNullAt(5);\n        field$9 = -1L;\n        if (!isNull$9) {\n            field$9 = acc.getLong(5);\n        }\n        isNull$6 = acc.isNullAt(2);\n        field$6 = -1L;\n        if (!isNull$6) {\n            field$6 = acc.getLong(2);\n        }\n\n        agg0_count1 = field$4;\n        ;\n        agg0_count1IsNull = isNull$4;\n\n\n        agg1_sum = field$5;\n        ;\n        agg1_sumIsNull = isNull$5;\n\n\n        agg2_sum = field$6;\n        ;\n        agg2_sumIsNull = isNull$6;\n\n\n        agg2_count = field$7;\n        ;\n        agg2_countIsNull = isNull$7;\n\n\n        agg3_min = field$8;\n        ;\n        agg3_minIsNull = isNull$8;\n\n\n        agg4_max = field$9;\n        ;\n        agg4_maxIsNull = isNull$9;\n\n\n    }\n\n    @Override\n    public void resetAccumulators() throws Exception {\n\n\n        agg0_count1 = ((long) 0L);\n        agg0_count1IsNull = false;\n\n\n        agg1_sum = ((long) -1L);\n        agg1_sumIsNull = true;\n\n\n        agg2_sum = ((long) 0L);\n        agg2_sumIsNull = false;\n\n\n        agg2_count = ((long) 0L);\n        agg2_countIsNull = false;\n\n\n        agg3_min = ((long) -1L);\n        agg3_minIsNull = true;\n\n\n        agg4_max = ((long) -1L);\n        agg4_maxIsNull = true;\n\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n\n\n        acc$3 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (agg0_count1IsNull) {\n            acc$3.setField(0, null);\n        } else {\n          
  acc$3.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            acc$3.setField(1, null);\n        } else {\n            acc$3.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_sumIsNull) {\n            acc$3.setField(2, null);\n        } else {\n            acc$3.setField(2, agg2_sum);\n        }\n\n\n        if (agg2_countIsNull) {\n            acc$3.setField(3, null);\n        } else {\n            acc$3.setField(3, agg2_count);\n        }\n\n\n        if (agg3_minIsNull) {\n            acc$3.setField(4, null);\n        } else {\n            acc$3.setField(4, agg3_min);\n        }\n\n\n        if (agg4_maxIsNull) {\n            acc$3.setField(5, null);\n        } else {\n            acc$3.setField(5, agg4_max);\n        }\n\n\n        return acc$3;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n\n\n        acc$2 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (false) {\n            acc$2.setField(0, null);\n        } else {\n            acc$2.setField(0, ((long) 0L));\n        }\n\n\n        if (true) {\n            acc$2.setField(1, null);\n        } else {\n            acc$2.setField(1, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$2.setField(2, null);\n        } else {\n            acc$2.setField(2, ((long) 0L));\n        }\n\n\n        if (false) {\n            acc$2.setField(3, null);\n        } else {\n            acc$2.setField(3, ((long) 0L));\n        }\n\n\n        if (true) {\n            acc$2.setField(4, null);\n        } else {\n            acc$2.setField(4, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$2.setField(5, null);\n        } else {\n            acc$2.setField(5, ((long) -1L));\n        }\n\n\n        return acc$2;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue() throws Exception {\n\n        boolean isNull$33;\n        boolean result$34;\n        boolean isNull$35;\n        long result$36;\n\n        aggValue$38 = new org.apache.flink.table.data.GenericRowData(5);\n\n\n        if (agg0_count1IsNull) {\n            aggValue$38.setField(0, null);\n        } else {\n            aggValue$38.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            aggValue$38.setField(1, null);\n        } else {\n            aggValue$38.setField(1, agg1_sum);\n        }\n\n\n        isNull$33 = agg2_countIsNull || false;\n        result$34 = false;\n        if (!isNull$33) {\n\n            result$34 = agg2_count == ((long) 0L);\n\n        }\n\n        long result$37 = -1L;\n        boolean isNull$37;\n        if (result$34) {\n\n            isNull$37 = true;\n            if (!isNull$37) {\n                result$37 = ((long) -1L);\n            }\n        } else {\n\n\n            isNull$35 = agg2_sumIsNull || agg2_countIsNull;\n            result$36 = -1L;\n            if (!isNull$35) {\n\n                result$36 = (long) (agg2_sum / agg2_count);\n\n            }\n\n            isNull$37 = isNull$35;\n            if (!isNull$37) {\n                result$37 = result$36;\n            }\n        }\n        if (isNull$37) {\n            aggValue$38.setField(2, null);\n        } else {\n            aggValue$38.setField(2, result$37);\n        }\n\n\n        if (agg3_minIsNull) {\n            aggValue$38.setField(3, null);\n        } else {\n            aggValue$38.setField(3, agg3_min);\n        }\n\n\n        if (agg4_maxIsNull) {\n            
aggValue$38.setField(4, null);\n        } else {\n            aggValue$38.setField(4, agg4_max);\n        }\n\n\n        return aggValue$38;\n\n    }\n\n    @Override\n    public void cleanup() throws Exception {\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_02_count_distinct/CountDistinctGroupAggTest.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._02_count_distinct;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class CountDistinctGroupAggTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '1000000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    uv BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"       count(distinct user_id) as uv\\n\"\n                + \"from source_table\\n\"\n                + \"group by dim\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"COUNT DISTINCT 案例\");\n\n        tEnv.executeSql(sourceSql);\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_02_count_distinct/GroupAggsHandler$17.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._02_count_distinct;\n\n\npublic final class GroupAggsHandler$17 implements org.apache.flink.table.runtime.generated.AggsHandleFunction {\n\n    long agg0_count;\n    boolean agg0_countIsNull;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$2;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$3;\n    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview;\n    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_raw_value;\n    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n    org.apache.flink.table.data.GenericRowData acc$5 = new org.apache.flink.table.data.GenericRowData(2);\n    org.apache.flink.table.data.GenericRowData acc$7 = new org.apache.flink.table.data.GenericRowData(2);\n    org.apache.flink.table.data.GenericRowData aggValue$16 = new org.apache.flink.table.data.GenericRowData(1);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    public GroupAggsHandler$17(Object[] references) throws Exception {\n        externalSerializer$2 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[0]));\n        externalSerializer$3 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[1]));\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n        distinctAcc_0_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$2, externalSerializer$3);\n        distinctAcc_0_dataview_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview);\n\n        distinct_view_0 = distinctAcc_0_dataview;\n    }\n\n    @Override\n    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n        long field$9;\n        boolean isNull$9;\n        boolean isNull$11;\n        long result$12;\n        isNull$9 = accInput.isNullAt(1);\n        field$9 = -1L;\n        if (!isNull$9) {\n            field$9 = accInput.getLong(1);\n        }\n\n\n        Long distinctKey$10 = (Long) field$9;\n        if (isNull$9) {\n            distinctKey$10 = null;\n        }\n\n        Long value$14 = (Long) distinct_view_0.get(distinctKey$10);\n        if (value$14 == null) {\n            value$14 = 0L;\n        }\n\n        boolean is_distinct_value_changed_0 = false;\n\n        long existed$15 = ((long) value$14) & (1L << 0);\n        if (existed$15 == 0) {  // not existed\n            value$14 = ((long) value$14) | (1L << 0);\n            is_distinct_value_changed_0 = true;\n\n            long result$13 = -1L;\n            boolean isNull$13;\n            if (isNull$9) {\n\n                isNull$13 = agg0_countIsNull;\n                if (!isNull$13) {\n                    result$13 = agg0_count;\n                }\n            } else {\n\n\n                isNull$11 = agg0_countIsNull || false;\n                result$12 = -1L;\n                if (!isNull$11) {\n\n                    result$12 = (long) (agg0_count + ((long) 1L));\n\n                }\n\n              
  isNull$13 = isNull$11;\n                if (!isNull$13) {\n                    result$13 = result$12;\n                }\n            }\n            agg0_count = result$13;\n            ;\n            agg0_countIsNull = isNull$13;\n\n        }\n\n        if (is_distinct_value_changed_0) {\n            distinct_view_0.put(distinctKey$10, value$14);\n        }\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        throw new RuntimeException(\n                \"This function not require retract method, but the retract method is called.\");\n\n    }\n\n    @Override\n    public void merge(org.apache.flink.table.data.RowData otherAcc) throws Exception {\n\n        throw new RuntimeException(\"This function not require merge method, but the merge method is called.\");\n\n    }\n\n    @Override\n    public void setAccumulators(org.apache.flink.table.data.RowData acc) throws Exception {\n\n        long field$8;\n        boolean isNull$8;\n        isNull$8 = acc.isNullAt(0);\n        field$8 = -1L;\n        if (!isNull$8) {\n            field$8 = acc.getLong(0);\n        }\n\n        distinct_view_0 = distinctAcc_0_dataview;\n\n        agg0_count = field$8;\n        ;\n        agg0_countIsNull = isNull$8;\n\n\n    }\n\n    @Override\n    public void resetAccumulators() throws Exception {\n\n\n        agg0_count = ((long) 0L);\n        agg0_countIsNull = false;\n\n        distinct_view_0.clear();\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n\n\n        acc$7 = new org.apache.flink.table.data.GenericRowData(2);\n\n\n        if (agg0_countIsNull) {\n            acc$7.setField(0, null);\n        } else {\n            acc$7.setField(0, agg0_count);\n        }\n\n\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$6 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n\n        if (false) {\n            acc$7.setField(1, null);\n        } else {\n            acc$7.setField(1, distinct_acc$6);\n        }\n\n\n        return acc$7;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n\n\n        acc$5 = new org.apache.flink.table.data.GenericRowData(2);\n\n\n        if (false) {\n            acc$5.setField(0, null);\n        } else {\n            acc$5.setField(0, ((long) 0L));\n        }\n\n\n        org.apache.flink.table.api.dataview.MapView mapview$4 = new org.apache.flink.table.api.dataview.MapView();\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$4 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$4);\n\n        if (false) {\n            acc$5.setField(1, null);\n        } else {\n            acc$5.setField(1, distinct_acc$4);\n        }\n\n\n        return acc$5;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue() throws Exception {\n\n\n        aggValue$16 = new org.apache.flink.table.data.GenericRowData(1);\n\n\n        if (agg0_countIsNull) {\n            aggValue$16.setField(0, null);\n        } else {\n            aggValue$16.setField(0, agg0_count);\n        }\n\n\n        return aggValue$16;\n\n    }\n\n    @Override\n    public void cleanup() throws Exception {\n\n        distinctAcc_0_dataview.clear();\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_03_grouping_sets/GroupingSetsEqualsGroupAggUnionAllGroupAggTest2.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._03_grouping_sets;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class GroupingSetsEqualsGroupAggUnionAllGroupAggTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    total BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"     supplier_id,\\n\"\n                + \"     product_id,\\n\"\n                + \"     COUNT(*) AS total\\n\"\n                + \"FROM (VALUES\\n\"\n                + \"     ('supplier1', 'product1', 4),\\n\"\n                + \"     ('supplier1', 'product2', 3),\\n\"\n                + \"     ('supplier2', 'product3', 3),\\n\"\n                + \"     ('supplier2', 'product4', 4))\\n\"\n                + \"AS Products(supplier_id, product_id, rating)\\n\"\n                + \"GROUP BY supplier_id, product_id\\n\"\n                + \"UNION ALL\\n\"\n                + \"SELECT\\n\"\n                + \"     supplier_id,\\n\"\n                + \"     cast(null as string) as product_id,\\n\"\n                + \"     COUNT(*) AS total\\n\"\n                + \"FROM (VALUES\\n\"\n                + \"     ('supplier1', 'product1', 4),\\n\"\n                + \"     ('supplier1', 'product2', 3),\\n\"\n                + \"     ('supplier2', 'product3', 3),\\n\"\n                + \"     ('supplier2', 'product4', 4))\\n\"\n                + \"AS Products(supplier_id, product_id, rating)\\n\"\n                + \"GROUP BY supplier_id\\n\"\n                
+ \"UNION ALL\\n\"\n                + \"SELECT\\n\"\n                + \"     cast(null as string) AS supplier_id,\\n\"\n                + \"     cast(null as string) AS product_id,\\n\"\n                + \"     COUNT(*) AS total\\n\"\n                + \"FROM (VALUES\\n\"\n                + \"     ('supplier1', 'product1', 4),\\n\"\n                + \"     ('supplier1', 'product2', 3),\\n\"\n                + \"     ('supplier2', 'product3', 3),\\n\"\n                + \"     ('supplier2', 'product4', 4))\\n\"\n                + \"AS Products(supplier_id, product_id, rating)\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUPING SETS 等同于 GROUP AGG UNION ALL 案例\");\n\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_03_grouping_sets/GroupingSetsGroupAggTest.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._03_grouping_sets;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class GroupingSetsGroupAggTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'fields.supplier_id.length' = '1',\\n\"\n                + \"  'fields.product_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    total BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"INSERT INTO sink_table\\n\"\n                + \"SELECT supplier_id,\\n\"\n                + \"       product_id,\\n\"\n                + \"       sum(price) as total\\n\"\n                + \"FROM source_table\\n\"\n                + \"GROUP BY GROUPING SETS ((supplier_id, product_id), (supplier_id), ())\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUPING SETS 案例\");\n\n        tEnv.executeSql(sourceSql);\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_03_grouping_sets/GroupingSetsGroupAggTest2.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._03_grouping_sets;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class GroupingSetsGroupAggTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    total BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"     supplier_id,\\n\"\n                + \"     product_id,\\n\"\n                + \"     COUNT(*) AS total\\n\"\n                + \"FROM (VALUES\\n\"\n                + \"     ('supplier1', 'product1', 4),\\n\"\n                + \"     ('supplier1', 'product2', 3),\\n\"\n                + \"     ('supplier2', 'product3', 3),\\n\"\n                + \"     ('supplier2', 'product4', 4))\\n\"\n                + \"AS Products(supplier_id, product_id, rating)\\n\"\n                + \"GROUP BY GROUPING SETS ((supplier_id, product_id), (supplier_id), ())\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUPING SETS 案例\");\n\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_03_grouping_sets/StreamExecExpand$20.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._03_grouping_sets;\n\n\npublic class StreamExecExpand$20 extends org.apache.flink.table.runtime.operators.TableStreamOperator\n        implements org.apache.flink.streaming.api.operators.OneInputStreamOperator {\n\n    private final Object[] references;\n    private transient org.apache.flink.table.runtime.typeutils.StringDataSerializer typeSerializer$15;\n    private transient org.apache.flink.table.runtime.typeutils.StringDataSerializer typeSerializer$18;\n    org.apache.flink.table.data.BoxedWrapperRowData out = new org.apache.flink.table.data.BoxedWrapperRowData(3);\n    private final org.apache.flink.streaming.runtime.streamrecord.StreamRecord outElement =\n            new org.apache.flink.streaming.runtime.streamrecord.StreamRecord(null);\n\n    public StreamExecExpand$20(\n            Object[] references,\n            org.apache.flink.streaming.runtime.tasks.StreamTask task,\n            org.apache.flink.streaming.api.graph.StreamConfig config,\n            org.apache.flink.streaming.api.operators.Output output,\n            org.apache.flink.streaming.runtime.tasks.ProcessingTimeService processingTimeService) throws Exception {\n        this.references = references;\n        typeSerializer$15 = (((org.apache.flink.table.runtime.typeutils.StringDataSerializer) references[0]));\n        typeSerializer$18 = (((org.apache.flink.table.runtime.typeutils.StringDataSerializer) references[1]));\n        this.setup(task, config, output);\n        if (this instanceof org.apache.flink.streaming.api.operators.AbstractStreamOperator) {\n            ((org.apache.flink.streaming.api.operators.AbstractStreamOperator) this)\n                    .setProcessingTimeService(processingTimeService);\n        }\n    }\n\n    @Override\n    public void open() throws Exception {\n        super.open();\n\n    }\n\n    @Override\n    public void processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord element) throws Exception {\n        org.apache.flink.table.data.RowData in1 = (org.apache.flink.table.data.RowData) element.getValue();\n\n        org.apache.flink.table.data.binary.BinaryStringData field$14;\n        boolean isNull$14;\n        org.apache.flink.table.data.binary.BinaryStringData field$16;\n        org.apache.flink.table.data.binary.BinaryStringData field$17;\n        boolean isNull$17;\n        org.apache.flink.table.data.binary.BinaryStringData field$19;\n\n        isNull$14 = in1.isNullAt(0);\n        field$14 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n        if (!isNull$14) {\n            field$14 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(0));\n        }\n        field$16 = field$14;\n        if (!isNull$14) {\n            field$16 = (org.apache.flink.table.data.binary.BinaryStringData) (typeSerializer$15.copy(field$16));\n        }\n\n\n        isNull$17 = in1.isNullAt(1);\n        field$17 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n        if (!isNull$17) {\n            field$17 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(1));\n        }\n        field$19 = field$17;\n        if (!isNull$17) {\n            field$19 = (org.apache.flink.table.data.binary.BinaryStringData) (typeSerializer$18.copy(field$19));\n        }\n\n        out.setRowKind(in1.getRowKind());\n\n\n        if (isNull$14) {\n            out.setNullAt(0);\n        } else {\n            out.setNonPrimitiveValue(0, field$16);\n        
}\n\n\n        if (isNull$17) {\n            out.setNullAt(1);\n        } else {\n            out.setNonPrimitiveValue(1, field$19);\n        }\n\n\n        if (false) {\n            out.setNullAt(2);\n        } else {\n            out.setLong(2, ((long) 0L));\n        }\n\n\n        output.collect(outElement.replace(out));\n        out.setRowKind(in1.getRowKind());\n\n\n        if (isNull$14) {\n            out.setNullAt(0);\n        } else {\n            out.setNonPrimitiveValue(0, field$16);\n        }\n\n\n        if (true) {\n            out.setNullAt(1);\n        } else {\n            out.setNonPrimitiveValue(1,\n                    ((org.apache.flink.table.data.binary.BinaryStringData) org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8));\n        }\n\n\n        if (false) {\n            out.setNullAt(2);\n        } else {\n            out.setLong(2, ((long) 1L));\n        }\n\n\n        output.collect(outElement.replace(out));\n        out.setRowKind(in1.getRowKind());\n\n\n        if (true) {\n            out.setNullAt(0);\n        } else {\n            out.setNonPrimitiveValue(0,\n                    ((org.apache.flink.table.data.binary.BinaryStringData) org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8));\n        }\n\n\n        if (true) {\n            out.setNullAt(1);\n        } else {\n            out.setNonPrimitiveValue(1,\n                    ((org.apache.flink.table.data.binary.BinaryStringData) org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8));\n        }\n\n\n        if (false) {\n            out.setNullAt(2);\n        } else {\n            out.setLong(2, ((long) 3L));\n        }\n\n\n        output.collect(outElement.replace(out));\n    }\n\n\n    @Override\n    public void close() throws Exception {\n        super.close();\n\n    }\n\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_04_cube/CubeGroupAggTest.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._04_cube;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class CubeGroupAggTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'fields.supplier_id.length' = '1',\\n\"\n                + \"  'fields.product_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    total BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"SELECT supplier_id, product_id, COUNT(*) as total\\n\"\n                + \"FROM source_table\\n\"\n                + \"AS Products(supplier_id, product_id, rating)\\n\"\n                + \"GROUP BY CUBE (supplier_id, product_id)\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"CUBE 案例\");\n\n        tEnv.executeSql(sourceSql);\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_04_cube/CubeGroupAggTest2.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._04_cube;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class CubeGroupAggTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    total BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"SELECT supplier_id, product_id, COUNT(*) as total\\n\"\n                + \"FROM (VALUES\\n\"\n                + \"    ('supplier1', 'product1', 4),\\n\"\n                + \"    ('supplier1', 'product2', 3),\\n\"\n                + \"    ('supplier2', 'product3', 3),\\n\"\n                + \"    ('supplier2', 'product4', 4))\\n\"\n                + \"AS Products(supplier_id, product_id, rating)\\n\"\n                + \"GROUP BY CUBE (supplier_id, product_id)\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"CUBE 案例\");\n\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_05_rollup/RollUpGroupAggTest.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._05_rollup;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class RollUpGroupAggTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'fields.supplier_id.length' = '1',\\n\"\n                + \"  'fields.product_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    total BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"SELECT supplier_id, product_id, COUNT(*) as total\\n\"\n                + \"FROM source_table\\n\"\n                + \"AS Products(supplier_id, product_id, rating)\\n\"\n                + \"GROUP BY ROLLUP (supplier_id, product_id)\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"ROLLUP 案例\");\n\n        tEnv.executeSql(sourceSql);\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_03_group_agg/_05_rollup/RollUpGroupAggTest2.java",
    "content": "package flink.examples.sql._07.query._03_group_agg._05_rollup;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class RollUpGroupAggTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    supplier_id STRING,\\n\"\n                + \"    product_id STRING,\\n\"\n                + \"    total BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"SELECT supplier_id, product_id, COUNT(*) as total\\n\"\n                + \"FROM (VALUES\\n\"\n                + \"    ('supplier1', 'product1', 4),\\n\"\n                + \"    ('supplier1', 'product2', 3),\\n\"\n                + \"    ('supplier2', 'product3', 3),\\n\"\n                + \"    ('supplier2', 'product4', 4))\\n\"\n                + \"AS Products(supplier_id, product_id, rating)\\n\"\n                + \"GROUP BY ROLLUP (supplier_id, product_id)\";\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"ROLLUP 案例\");\n\n        tEnv.executeSql(sinkSql);\n        tEnv.executeSql(selectWhereSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/TumbleWindow2GroupAggTest.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TumbleWindow2GroupAggTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\", \"--table.optimizer.agg-phase-strategy\", \"TWO_PHASE\"});\n\n        String sql = \"-- 数据源表\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    -- 维度数据\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    -- 用户 id\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    -- 用户\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    -- 事件时间戳\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    -- watermark 设置\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"-- 数据汇表\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"-- 数据处理逻辑\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"       sum(bucket_pv) as pv,\\n\"\n                + \"       sum(bucket_sum_price) as sum_price,\\n\"\n                + \"       max(bucket_max_price) as max_price,\\n\"\n                + \"       min(bucket_min_price) as min_price,\\n\"\n                + \"       sum(bucket_uv) as uv,\\n\"\n                + \"       max(window_start) as window_start\\n\"\n                + \"from (\\n\"\n                + \"     select dim,\\n\"\n                + \"            count(*) as bucket_pv,\\n\"\n                + \"            sum(price) as bucket_sum_price,\\n\"\n                + \"            max(price) as bucket_max_price,\\n\"\n                + \"            min(price) as bucket_min_price,\\n\"\n                + \"            -- 计算 uv 数\\n\"\n                + \"            count(distinct user_id) as bucket_uv,\\n\"\n                + \"            cast((UNIX_TIMESTAMP(CAST(row_time AS STRING))) / 60 as bigint) as window_start\\n\"\n                + \"     from source_table\\n\"\n                + \"     group by\\n\"\n                + \"            -- 按照用户 id 进行分桶，防止数据倾斜\\n\"\n                + \"            mod(user_id, 1024),\\n\"\n                + \"            dim,\\n\"\n                + \"            cast((UNIX_TIMESTAMP(CAST(row_time AS STRING))) / 60 as 
bigint)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"         window_start\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 WINDOW TVF TUMBLE WINDOW 案例\");\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/TumbleWindowTest.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TumbleWindowTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--table.optimizer.agg-phase-strategy\", \"TWO_PHASE\"});\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10000',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"\\t   sum(bucket_pv) as pv,\\n\"\n                + \"\\t   sum(bucket_sum_price) as sum_price,\\n\"\n                + \"\\t   max(bucket_max_price) as max_price,\\n\"\n                + \"\\t   min(bucket_min_price) as min_price,\\n\"\n                + \"\\t   sum(bucket_uv) as uv,\\n\"\n                + \"\\t   max(window_start) as window_start\\n\"\n                + \"from (\\n\"\n                + \"\\t SELECT dim,\\n\"\n                + \"\\t \\t    UNIX_TIMESTAMP(CAST(window_start AS STRING)) * 1000 as window_start, \\n\"\n                + \"\\t        window_end, \\n\"\n                + \"\\t        count(*) as bucket_pv,\\n\"\n                + \"\\t        sum(price) as bucket_sum_price,\\n\"\n                + \"\\t        max(price) as bucket_max_price,\\n\"\n                + \"\\t        min(price) as bucket_min_price,\\n\"\n                + \"\\t        count(distinct user_id) as bucket_uv\\n\"\n                + \"\\t FROM TABLE(TUMBLE(\\n\"\n                + \"\\t \\t\\t\\tTABLE source_table\\n\"\n                + \"\\t \\t\\t\\t, DESCRIPTOR(row_time)\\n\"\n                + \"\\t \\t\\t\\t, INTERVAL '60' SECOND))\\n\"\n                + \"\\t GROUP BY window_start, \\n\"\n                + \"\\t  \\t\\t  window_end,\\n\"\n                + \"\\t\\t\\t  dim,\\n\"\n                + \"\\t\\t\\t  mod(user_id, 1024)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"\\t\\t window_start\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 WINDOW TVF TUMBLE WINDOW 案例\");\n\n        flinkEnv.streamTEnv().executeSql(sourceSql);\n        
flinkEnv.streamTEnv().executeSql(sinkSql);\n        flinkEnv.streamTEnv().executeSql(selectWhereSql);\n\n        /**\n         * 两阶段聚合\n         * 本地 agg：{@link org.apache.flink.table.runtime.operators.aggregate.window.LocalSlicingWindowAggOperator}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.LocalAggCombiner}\n         *\n         * key agg；{@link org.apache.flink.table.runtime.operators.window.slicing.SlicingWindowOperator}\n         *    -> {@link org.apache.flink.table.runtime.operators.aggregate.window.processors.SliceUnsharedWindowAggProcessor}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.GlobalAggCombiner}\n         */\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/TumbleWindowTest2.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TumbleWindowTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--table.optimizer.agg-phase-strategy\", \"ONE_PHASE\"});\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10000',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql1 = \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"\\t   sum(bucket_pv) as pv,\\n\"\n                + \"\\t   sum(bucket_sum_price) as sum_price,\\n\"\n                + \"\\t   max(bucket_max_price) as max_price,\\n\"\n                + \"\\t   min(bucket_min_price) as min_price,\\n\"\n                + \"\\t   sum(bucket_uv) as uv,\\n\"\n                + \"\\t   UNIX_TIMESTAMP(CAST(window_start AS STRING)) as window_start\\n\"\n                + \"from TABLE(\\n\"\n                + \"\\t TUMBLE(\\n\"\n                + \"\\t\\tTABLE (\\n\"\n                + \"\\t\\t\\tSELECT \\n\"\n                + \"\\t\\t\\t\\tdim,\\n\"\n                + \"\\t\\t\\t\\twindow_time as rowtime,\\n\"\n                + \"\\t\\t\\t    count(*) as bucket_pv,\\n\"\n                + \"\\t\\t\\t    sum(price) as bucket_sum_price,\\n\"\n                + \"\\t\\t\\t    max(price) as bucket_max_price,\\n\"\n                + \"\\t\\t\\t    min(price) as bucket_min_price,\\n\"\n                + \"\\t\\t\\t    count(distinct user_id) as bucket_uv\\n\"\n                + \"\\t\\t\\tFROM TABLE(TUMBLE(\\n\"\n                + \"\\t\\t\\t\\t\\t\\tTABLE source_table\\n\"\n                + \"\\t\\t\\t\\t\\t\\t, DESCRIPTOR(row_time)\\n\"\n                + \"\\t\\t\\t\\t\\t\\t, INTERVAL '60' SECOND))\\n\"\n                + \"\\t\\t\\tGROUP BY \\n\"\n                + \"\\t\\t\\t\\twindow_time,window_start, window_end,\\n\"\n                + \"\\t\\t\\t\\tdim,\\n\"\n                + \"\\t\\t\\t\\tmod(user_id, 1024)\\n\"\n                + \"\\t\\t)\\n\"\n                + \"\\t\\t, DESCRIPTOR(rowtime)\\n\"\n                + \"\\t\\t, INTERVAL '60' SECOND)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + 
\"\\t\\t window_start, window_end\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"with tmp as (\\n\"\n                + \"\\tSELECT \\n\"\n                + \"\\t\\tdim,\\n\"\n                + \"\\t\\twindow_time as t,\\n\"\n                + \"\\t    count(*) as bucket_pv,\\n\"\n                + \"\\t    sum(price) as bucket_sum_price,\\n\"\n                + \"\\t    max(price) as bucket_max_price,\\n\"\n                + \"\\t    min(price) as bucket_min_price,\\n\"\n                + \"\\t    count(distinct user_id) as bucket_uv\\n\"\n                + \"\\tFROM TABLE(TUMBLE(\\n\"\n                + \"\\t\\t\\t\\tTABLE source_table\\n\"\n                + \"\\t\\t\\t\\t, DESCRIPTOR(row_time)\\n\"\n                + \"\\t\\t\\t\\t, INTERVAL '60' SECOND))\\n\"\n                + \"\\tGROUP BY \\n\"\n                + \"\\t\\twindow_time,window_start, window_end, \\n\"\n                + \"\\t\\tdim,\\n\"\n                + \"\\t\\tmod(user_id, 1024)\\n\"\n                + \")\\n\"\n                + \"select dim,\\n\"\n                + \"\\t   sum(bucket_pv) as pv,\\n\"\n                + \"\\t   sum(bucket_sum_price) as sum_price,\\n\"\n                + \"\\t   max(bucket_max_price) as max_price,\\n\"\n                + \"\\t   min(bucket_min_price) as min_price,\\n\"\n                + \"\\t   sum(bucket_uv) as uv,\\n\"\n                + \"\\t   UNIX_TIMESTAMP(CAST(window_start AS STRING)) as window_start\\n\"\n                + \"from TABLE(\\n\"\n                + \"\\t TUMBLE(\\n\"\n                + \"\\t\\tTABLE tmp\\n\"\n                + \"\\t\\t, DESCRIPTOR(t)\\n\"\n                + \"\\t\\t, INTERVAL '60' SECOND)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"\\t\\t window_start, window_end\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 WINDOW TVF TUMBLE WINDOW 案例\");\n\n        flinkEnv.streamTEnv().executeSql(sourceSql);\n        flinkEnv.streamTEnv().executeSql(sinkSql);\n        flinkEnv.streamTEnv().executeSql(selectWhereSql);\n\n        /**\n         * 两阶段聚合\n         * 本地 agg：{@link org.apache.flink.table.runtime.operators.aggregate.window.LocalSlicingWindowAggOperator}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.LocalAggCombiner}\n         *\n         * key agg；{@link org.apache.flink.table.runtime.operators.window.slicing.SlicingWindowOperator}\n         *    -> {@link org.apache.flink.table.runtime.operators.aggregate.window.processors.SliceUnsharedWindowAggProcessor}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.GlobalAggCombiner}\n         */\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/TumbleWindowTest3.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TumbleWindowTest3 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--table.optimizer.agg-phase-strategy\", \"TWO_PHASE\"});\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10000',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"\\t   sum(bucket_pv) as pv,\\n\"\n                + \"\\t   sum(bucket_sum_price) as sum_price,\\n\"\n                + \"\\t   max(bucket_max_price) as max_price,\\n\"\n                + \"\\t   min(bucket_min_price) as min_price,\\n\"\n                + \"\\t   sum(bucket_uv) as uv,\\n\"\n                + \"\\t   UNIX_TIMESTAMP(CAST(TUMBLE_START(rowtime, INTERVAL '5' MINUTE) AS STRING)) as window_start\\n\"\n                + \"from (\\n\"\n                + \"\\tSELECT \\n\"\n                + \"\\t\\tdim,\\n\"\n                + \"\\t\\twindow_time as rowtime,\\n\"\n                + \"\\t    count(*) as bucket_pv,\\n\"\n                + \"\\t    sum(price) as bucket_sum_price,\\n\"\n                + \"\\t    max(price) as bucket_max_price,\\n\"\n                + \"\\t    min(price) as bucket_min_price,\\n\"\n                + \"\\t    count(distinct user_id) as bucket_uv\\n\"\n                + \"\\tFROM TABLE(TUMBLE(\\n\"\n                + \"\\t\\t\\t\\tTABLE source_table\\n\"\n                + \"\\t\\t\\t\\t, DESCRIPTOR(row_time)\\n\"\n                + \"\\t\\t\\t\\t, INTERVAL '60' SECOND))\\n\"\n                + \"\\tGROUP BY \\n\"\n                + \"\\t\\twindow_time,window_start, window_end, \\n\"\n                + \"\\t\\tdim,\\n\"\n                + \"\\t\\tmod(user_id, 1024)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"  \\t\\tTUMBLE(rowtime, INTERVAL '5' MINUTE)\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 WINDOW TVF TUMBLE WINDOW 案例\");\n\n        flinkEnv.streamTEnv().executeSql(sourceSql);\n        flinkEnv.streamTEnv().executeSql(sinkSql);\n 
       flinkEnv.streamTEnv().executeSql(selectWhereSql);\n\n        /**\n         * 两阶段聚合\n         * 本地 agg：{@link org.apache.flink.table.runtime.operators.aggregate.window.LocalSlicingWindowAggOperator}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.LocalAggCombiner}\n         *\n         * key agg；{@link org.apache.flink.table.runtime.operators.window.slicing.SlicingWindowOperator}\n         *    -> {@link org.apache.flink.table.runtime.operators.aggregate.window.processors.SliceUnsharedWindowAggProcessor}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.GlobalAggCombiner}\n         */\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/TumbleWindowTest4.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TumbleWindowTest4 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--table.optimizer.agg-phase-strategy\", \"TWO_PHASE\"});\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10000',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"\\t   sum(bucket_pv) as pv,\\n\"\n                + \"\\t   sum(bucket_sum_price) as sum_price,\\n\"\n                + \"\\t   max(bucket_max_price) as max_price,\\n\"\n                + \"\\t   min(bucket_min_price) as min_price,\\n\"\n                + \"\\t   sum(bucket_uv) as uv,\\n\"\n                + \"\\t   UNIX_TIMESTAMP(CAST(TUMBLE_START(rowtime, INTERVAL '5' MINUTE) AS STRING)) as rowtime\\n\"\n                + \"from (\\n\"\n                + \"\\tSELECT \\n\"\n                + \"\\t\\tdim,\\n\"\n                + \"\\t\\tTUMBLE_ROWTIME(row_time, INTERVAL '5' MINUTE) as rowtime,\\n\"\n                + \"\\t    count(*) as bucket_pv,\\n\"\n                + \"\\t    sum(price) as bucket_sum_price,\\n\"\n                + \"\\t    max(price) as bucket_max_price,\\n\"\n                + \"\\t    min(price) as bucket_min_price,\\n\"\n                + \"\\t    count(distinct user_id) as bucket_uv\\n\"\n                + \"\\tFROM source_table\\n\"\n                + \"\\tGROUP BY \\n\"\n                + \"\\t\\tTUMBLE(row_time, INTERVAL '5' MINUTE),\\n\"\n                + \"\\t\\tdim,\\n\"\n                + \"\\t\\tmod(user_id, 1024)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"  \\t\\tTUMBLE(rowtime, INTERVAL '5' MINUTE)\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 WINDOW TVF TUMBLE WINDOW 案例\");\n\n        flinkEnv.streamTEnv().executeSql(sourceSql);\n        flinkEnv.streamTEnv().executeSql(sinkSql);\n        flinkEnv.streamTEnv().executeSql(selectWhereSql);\n\n        /**\n         * 两阶段聚合\n         * 本地 agg：{@link 
org.apache.flink.table.runtime.operators.aggregate.window.LocalSlicingWindowAggOperator}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.LocalAggCombiner}\n         *\n         * key agg；{@link org.apache.flink.table.runtime.operators.window.slicing.SlicingWindowOperator}\n         *    -> {@link org.apache.flink.table.runtime.operators.aggregate.window.processors.SliceUnsharedWindowAggProcessor}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.GlobalAggCombiner}\n         */\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/TumbleWindowTest5.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window;\n\nimport org.apache.flink.api.common.functions.FlatMapFunction;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.tuple.Tuple2;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;\nimport org.apache.flink.streaming.api.windowing.time.Time;\nimport org.apache.flink.types.Row;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TumbleWindowTest5 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--table.optimizer.agg-phase-strategy\", \"TWO_PHASE\"});\n\n        String sourceSql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10000',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \")\";\n\n        String sinkSql = \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \")\";\n\n        flinkEnv.streamTEnv().executeSql(sourceSql);\n        flinkEnv.streamTEnv().executeSql(sinkSql);\n\n        String s1 = \"\\tSELECT \\n\"\n                + \"\\t\\tdim,\\n\"\n                + \"\\t\\tUNIX_TIMESTAMP(CAST(window_start AS STRING)) as rowtime,\\n\"\n                + \"\\t    count(*) as bucket_pv,\\n\"\n                + \"\\t    sum(price) as bucket_sum_price,\\n\"\n                + \"\\t    max(price) as bucket_max_price,\\n\"\n                + \"\\t    min(price) as bucket_min_price,\\n\"\n                + \"\\t    count(distinct user_id) as bucket_uv\\n\"\n                + \"\\tFROM TABLE(TUMBLE(\\n\"\n                + \"\\t\\t\\t\\tTABLE source_table\\n\"\n                + \"\\t\\t\\t\\t, DESCRIPTOR(row_time)\\n\"\n                + \"\\t\\t\\t\\t, INTERVAL '60' SECOND))\\n\"\n                + \"\\tGROUP BY \\n\"\n                + \"\\t\\twindow_start,\\n\"\n                + \"\\t\\twindow_end,\\n\"\n                + \"\\t\\tdim,\\n\"\n                + \"\\t\\tmod(user_id, 1024)\\n\";\n\n        DataStream<Row> r = flinkEnv.streamTEnv()\n                .toRetractStream(flinkEnv.streamTEnv().sqlQuery(s1), Row.class)\n                .flatMap(new FlatMapFunction<Tuple2<Boolean, Row>, Row>() {\n                    @Override\n            
        public void flatMap(Tuple2<Boolean, Row> value, Collector<Row> out)\n                            throws Exception {\n                        out.collect(value.f1);\n                    }\n                })\n                .returns(new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(Long.class),\n                        TypeInformation.of(Long.class), TypeInformation.of(Long.class), TypeInformation.of(Long.class),\n                        TypeInformation.of(Long.class), TypeInformation.of(Long.class)));\n\n        DataStream<Row> d = r\n                .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor<Row>(Time.minutes(0)) {\n                    @Override\n                    public long extractTimestamp(Row element) {\n                        return element.getFieldAs(\"f1\");\n                    }\n                });\n\n//        Table t = flinkEnv.streamTEnv().fromDataStream(d, \"dim, rowtime, bucket_pv, bucket_sum_price, bucket_max_price, bucket_min_price, bucket_uv, rowtime.rowtime\");\n\n        flinkEnv.streamTEnv().createTemporaryView(\"tmp\", d, \"dim, bucket_pv, bucket_sum_price, bucket_max_price, bucket_min_price, bucket_uv, rowtime.rowtime\");\n\n        String selectWhereSql = \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"\\t   sum(bucket_pv) as pv,\\n\"\n                + \"\\t   sum(bucket_sum_price) as sum_price,\\n\"\n                + \"\\t   max(bucket_max_price) as max_price,\\n\"\n                + \"\\t   min(bucket_min_price) as min_price,\\n\"\n                + \"\\t   sum(bucket_uv) as uv,\\n\"\n                + \"\\t   UNIX_TIMESTAMP(CAST(window_start AS STRING)) as window_start\\n\"\n                + \"from TABLE(\\n\"\n                + \"\\t TUMBLE(\\n\"\n                + \"\\t\\tTABLE tmp\\n\"\n                + \"\\t\\t, DESCRIPTOR(rowtime)\\n\"\n                + \"\\t\\t, INTERVAL '60' SECOND)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"\\t\\t window_start, window_end\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 WINDOW TVF TUMBLE WINDOW 案例\");\n\n\n        flinkEnv.streamTEnv().executeSql(selectWhereSql);\n\n        /**\n         * 两阶段聚合\n         * 本地 agg：{@link org.apache.flink.table.runtime.operators.aggregate.window.LocalSlicingWindowAggOperator}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.LocalAggCombiner}\n         *\n         * key agg；{@link org.apache.flink.table.runtime.operators.window.slicing.SlicingWindowOperator}\n         *    -> {@link org.apache.flink.table.runtime.operators.aggregate.window.processors.SliceUnsharedWindowAggProcessor}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.GlobalAggCombiner}\n         */\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/global_agg/GlobalWindowAggsHandler$232.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window.global_agg;\n\n\npublic final class GlobalWindowAggsHandler$232\n        implements org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<Long> {\n\n    private transient org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedUnsharedSliceAssigner\n            sliceAssigner$163;\n    long agg0_count1;\n    boolean agg0_count1IsNull;\n    long agg1_sum;\n    boolean agg1_sumIsNull;\n    long agg2_max;\n    boolean agg2_maxIsNull;\n    long agg3_min;\n    boolean agg3_minIsNull;\n    long agg4_count;\n    boolean agg4_countIsNull;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$164;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$165;\n    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview;\n    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_raw_value;\n    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n    org.apache.flink.table.data.GenericRowData acc$167 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData acc$169 = new org.apache.flink.table.data.GenericRowData(6);\n    private org.apache.flink.table.api.dataview.MapView otherMapView$221;\n    private transient org.apache.flink.table.data.conversion.RawObjectConverter converter$222;\n    org.apache.flink.table.data.GenericRowData aggValue$231 = new org.apache.flink.table.data.GenericRowData(7);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    private Long namespace;\n\n    public GlobalWindowAggsHandler$232(Object[] references) throws Exception {\n        sliceAssigner$163 =\n                (((org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedUnsharedSliceAssigner) references[0]));\n        externalSerializer$164 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[1]));\n        externalSerializer$165 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[2]));\n        converter$222 = (((org.apache.flink.table.data.conversion.RawObjectConverter) references[3]));\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n        distinctAcc_0_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$164, externalSerializer$165);\n        distinctAcc_0_dataview_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview);\n\n        distinct_view_0 = distinctAcc_0_dataview;\n\n        converter$222.open(getRuntimeContext().getUserCodeClassLoader());\n\n    }\n\n    @Override\n    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n        boolean isNull$176;\n        long result$177;\n        long field$178;\n        boolean isNull$178;\n        boolean isNull$179;\n        long result$180;\n        boolean isNull$183;\n        boolean result$184;\n        boolean isNull$188;\n        boolean result$189;\n        long field$193;\n        
boolean isNull$193;\n        boolean isNull$195;\n        long result$196;\n        isNull$178 = accInput.isNullAt(2);\n        field$178 = -1L;\n        if (!isNull$178) {\n            field$178 = accInput.getLong(2);\n        }\n        isNull$193 = accInput.isNullAt(3);\n        field$193 = -1L;\n        if (!isNull$193) {\n            field$193 = accInput.getLong(3);\n        }\n\n\n        isNull$176 = agg0_count1IsNull || false;\n        result$177 = -1L;\n        if (!isNull$176) {\n\n            result$177 = (long) (agg0_count1 + ((long) 1L));\n\n        }\n\n        agg0_count1 = result$177;\n        ;\n        agg0_count1IsNull = isNull$176;\n\n\n        long result$182 = -1L;\n        boolean isNull$182;\n        if (isNull$178) {\n\n            isNull$182 = agg1_sumIsNull;\n            if (!isNull$182) {\n                result$182 = agg1_sum;\n            }\n        } else {\n            long result$181 = -1L;\n            boolean isNull$181;\n            if (agg1_sumIsNull) {\n\n                isNull$181 = isNull$178;\n                if (!isNull$181) {\n                    result$181 = field$178;\n                }\n            } else {\n\n\n                isNull$179 = agg1_sumIsNull || isNull$178;\n                result$180 = -1L;\n                if (!isNull$179) {\n\n                    result$180 = (long) (agg1_sum + field$178);\n\n                }\n\n                isNull$181 = isNull$179;\n                if (!isNull$181) {\n                    result$181 = result$180;\n                }\n            }\n            isNull$182 = isNull$181;\n            if (!isNull$182) {\n                result$182 = result$181;\n            }\n        }\n        agg1_sum = result$182;\n        ;\n        agg1_sumIsNull = isNull$182;\n\n\n        long result$187 = -1L;\n        boolean isNull$187;\n        if (isNull$178) {\n\n            isNull$187 = agg2_maxIsNull;\n            if (!isNull$187) {\n                result$187 = agg2_max;\n            }\n        } else {\n            long result$186 = -1L;\n            boolean isNull$186;\n            if (agg2_maxIsNull) {\n\n                isNull$186 = isNull$178;\n                if (!isNull$186) {\n                    result$186 = field$178;\n                }\n            } else {\n                isNull$183 = isNull$178 || agg2_maxIsNull;\n                result$184 = false;\n                if (!isNull$183) {\n\n                    result$184 = field$178 > agg2_max;\n\n                }\n\n                long result$185 = -1L;\n                boolean isNull$185;\n                if (result$184) {\n\n                    isNull$185 = isNull$178;\n                    if (!isNull$185) {\n                        result$185 = field$178;\n                    }\n                } else {\n\n                    isNull$185 = agg2_maxIsNull;\n                    if (!isNull$185) {\n                        result$185 = agg2_max;\n                    }\n                }\n                isNull$186 = isNull$185;\n                if (!isNull$186) {\n                    result$186 = result$185;\n                }\n            }\n            isNull$187 = isNull$186;\n            if (!isNull$187) {\n                result$187 = result$186;\n            }\n        }\n        agg2_max = result$187;\n        ;\n        agg2_maxIsNull = isNull$187;\n\n\n        long result$192 = -1L;\n        boolean isNull$192;\n        if (isNull$178) {\n\n            isNull$192 = agg3_minIsNull;\n            if (!isNull$192) {\n                result$192 = 
agg3_min;\n            }\n        } else {\n            long result$191 = -1L;\n            boolean isNull$191;\n            if (agg3_minIsNull) {\n\n                isNull$191 = isNull$178;\n                if (!isNull$191) {\n                    result$191 = field$178;\n                }\n            } else {\n                isNull$188 = isNull$178 || agg3_minIsNull;\n                result$189 = false;\n                if (!isNull$188) {\n\n                    result$189 = field$178 < agg3_min;\n\n                }\n\n                long result$190 = -1L;\n                boolean isNull$190;\n                if (result$189) {\n\n                    isNull$190 = isNull$178;\n                    if (!isNull$190) {\n                        result$190 = field$178;\n                    }\n                } else {\n\n                    isNull$190 = agg3_minIsNull;\n                    if (!isNull$190) {\n                        result$190 = agg3_min;\n                    }\n                }\n                isNull$191 = isNull$190;\n                if (!isNull$191) {\n                    result$191 = result$190;\n                }\n            }\n            isNull$192 = isNull$191;\n            if (!isNull$192) {\n                result$192 = result$191;\n            }\n        }\n        agg3_min = result$192;\n        ;\n        agg3_minIsNull = isNull$192;\n\n\n        Long distinctKey$194 = (Long) field$193;\n        if (isNull$193) {\n            distinctKey$194 = null;\n        }\n\n        Long value$198 = (Long) distinct_view_0.get(distinctKey$194);\n        if (value$198 == null) {\n            value$198 = 0L;\n        }\n\n        boolean is_distinct_value_changed_0 = false;\n\n        long existed$199 = ((long) value$198) & (1L << 0);\n        if (existed$199 == 0) {  // not existed\n            value$198 = ((long) value$198) | (1L << 0);\n            is_distinct_value_changed_0 = true;\n\n            long result$197 = -1L;\n            boolean isNull$197;\n            if (isNull$193) {\n\n                isNull$197 = agg4_countIsNull;\n                if (!isNull$197) {\n                    result$197 = agg4_count;\n                }\n            } else {\n\n\n                isNull$195 = agg4_countIsNull || false;\n                result$196 = -1L;\n                if (!isNull$195) {\n\n                    result$196 = (long) (agg4_count + ((long) 1L));\n\n                }\n\n                isNull$197 = isNull$195;\n                if (!isNull$197) {\n                    result$197 = result$196;\n                }\n            }\n            agg4_count = result$197;\n            ;\n            agg4_countIsNull = isNull$197;\n\n        }\n\n        if (is_distinct_value_changed_0) {\n            distinct_view_0.put(distinctKey$194, value$198);\n        }\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        throw new RuntimeException(\n                \"This function not require retract method, but the retract method is called.\");\n\n    }\n\n    @Override\n    public void merge(Long ns, org.apache.flink.table.data.RowData otherAcc) throws Exception {\n        namespace = (Long) ns;\n\n        long field$200;\n        boolean isNull$200;\n        boolean isNull$201;\n        long result$202;\n        long field$203;\n        boolean isNull$203;\n        boolean isNull$204;\n        long result$205;\n        long field$208;\n        boolean isNull$208;\n        boolean isNull$209;\n        
boolean result$210;\n        long field$214;\n        boolean isNull$214;\n        boolean isNull$215;\n        boolean result$216;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$220;\n        boolean isNull$220;\n        boolean isNull$226;\n        long result$227;\n        isNull$208 = otherAcc.isNullAt(2);\n        field$208 = -1L;\n        if (!isNull$208) {\n            field$208 = otherAcc.getLong(2);\n        }\n        isNull$203 = otherAcc.isNullAt(1);\n        field$203 = -1L;\n        if (!isNull$203) {\n            field$203 = otherAcc.getLong(1);\n        }\n        isNull$200 = otherAcc.isNullAt(0);\n        field$200 = -1L;\n        if (!isNull$200) {\n            field$200 = otherAcc.getLong(0);\n        }\n        isNull$214 = otherAcc.isNullAt(3);\n        field$214 = -1L;\n        if (!isNull$214) {\n            field$214 = otherAcc.getLong(3);\n        }\n\n        isNull$220 = otherAcc.isNullAt(5);\n        field$220 = null;\n        if (!isNull$220) {\n            field$220 = ((org.apache.flink.table.data.binary.BinaryRawValueData) otherAcc.getRawValue(5));\n        }\n        otherMapView$221 = null;\n        if (!isNull$220) {\n            otherMapView$221 =\n                    (org.apache.flink.table.api.dataview.MapView) converter$222\n                            .toExternal((org.apache.flink.table.data.binary.BinaryRawValueData) field$220);\n        }\n\n\n        isNull$201 = agg0_count1IsNull || isNull$200;\n        result$202 = -1L;\n        if (!isNull$201) {\n\n            result$202 = (long) (agg0_count1 + field$200);\n\n        }\n\n        agg0_count1 = result$202;\n        ;\n        agg0_count1IsNull = isNull$201;\n\n\n        long result$207 = -1L;\n        boolean isNull$207;\n        if (isNull$203) {\n\n            isNull$207 = agg1_sumIsNull;\n            if (!isNull$207) {\n                result$207 = agg1_sum;\n            }\n        } else {\n            long result$206 = -1L;\n            boolean isNull$206;\n            if (agg1_sumIsNull) {\n\n                isNull$206 = isNull$203;\n                if (!isNull$206) {\n                    result$206 = field$203;\n                }\n            } else {\n\n\n                isNull$204 = agg1_sumIsNull || isNull$203;\n                result$205 = -1L;\n                if (!isNull$204) {\n\n                    result$205 = (long) (agg1_sum + field$203);\n\n                }\n\n                isNull$206 = isNull$204;\n                if (!isNull$206) {\n                    result$206 = result$205;\n                }\n            }\n            isNull$207 = isNull$206;\n            if (!isNull$207) {\n                result$207 = result$206;\n            }\n        }\n        agg1_sum = result$207;\n        ;\n        agg1_sumIsNull = isNull$207;\n\n\n        long result$213 = -1L;\n        boolean isNull$213;\n        if (isNull$208) {\n\n            isNull$213 = agg2_maxIsNull;\n            if (!isNull$213) {\n                result$213 = agg2_max;\n            }\n        } else {\n            long result$212 = -1L;\n            boolean isNull$212;\n            if (agg2_maxIsNull) {\n\n                isNull$212 = isNull$208;\n                if (!isNull$212) {\n                    result$212 = field$208;\n                }\n            } else {\n                isNull$209 = isNull$208 || agg2_maxIsNull;\n                result$210 = false;\n                if (!isNull$209) {\n\n                    result$210 = field$208 > agg2_max;\n\n                }\n\n             
   long result$211 = -1L;\n                boolean isNull$211;\n                if (result$210) {\n\n                    isNull$211 = isNull$208;\n                    if (!isNull$211) {\n                        result$211 = field$208;\n                    }\n                } else {\n\n                    isNull$211 = agg2_maxIsNull;\n                    if (!isNull$211) {\n                        result$211 = agg2_max;\n                    }\n                }\n                isNull$212 = isNull$211;\n                if (!isNull$212) {\n                    result$212 = result$211;\n                }\n            }\n            isNull$213 = isNull$212;\n            if (!isNull$213) {\n                result$213 = result$212;\n            }\n        }\n        agg2_max = result$213;\n        ;\n        agg2_maxIsNull = isNull$213;\n\n\n        long result$219 = -1L;\n        boolean isNull$219;\n        if (isNull$214) {\n\n            isNull$219 = agg3_minIsNull;\n            if (!isNull$219) {\n                result$219 = agg3_min;\n            }\n        } else {\n            long result$218 = -1L;\n            boolean isNull$218;\n            if (agg3_minIsNull) {\n\n                isNull$218 = isNull$214;\n                if (!isNull$218) {\n                    result$218 = field$214;\n                }\n            } else {\n                isNull$215 = isNull$214 || agg3_minIsNull;\n                result$216 = false;\n                if (!isNull$215) {\n\n                    result$216 = field$214 < agg3_min;\n\n                }\n\n                long result$217 = -1L;\n                boolean isNull$217;\n                if (result$216) {\n\n                    isNull$217 = isNull$214;\n                    if (!isNull$217) {\n                        result$217 = field$214;\n                    }\n                } else {\n\n                    isNull$217 = agg3_minIsNull;\n                    if (!isNull$217) {\n                        result$217 = agg3_min;\n                    }\n                }\n                isNull$218 = isNull$217;\n                if (!isNull$218) {\n                    result$218 = result$217;\n                }\n            }\n            isNull$219 = isNull$218;\n            if (!isNull$219) {\n                result$219 = result$218;\n            }\n        }\n        agg3_min = result$219;\n        ;\n        agg3_minIsNull = isNull$219;\n\n\n        Iterable<java.util.Map.Entry> otherEntries$229 =\n                (Iterable<java.util.Map.Entry>) otherMapView$221.entries();\n        if (otherEntries$229 != null) {\n            for (java.util.Map.Entry entry : otherEntries$229) {\n                Long distinctKey$223 = (Long) entry.getKey();\n                long field$224 = -1L;\n                boolean isNull$225 = true;\n                if (distinctKey$223 != null) {\n                    isNull$225 = false;\n                    field$224 = (long) distinctKey$223;\n                }\n                Long otherValue = (Long) entry.getValue();\n                Long thisValue = (Long) distinct_view_0.get(distinctKey$223);\n                if (thisValue == null) {\n                    thisValue = 0L;\n                }\n                boolean is_distinct_value_changed_0 = false;\n                boolean is_distinct_value_empty_0 = false;\n\n\n                long existed$230 = ((long) thisValue) & (1L << 0);\n                if (existed$230 == 0) {  // not existed\n                    long otherExisted = ((long) otherValue) & (1L << 0);\n          
          if (otherExisted != 0) {  // existed in other\n                        is_distinct_value_changed_0 = true;\n                        // do accumulate\n\n                        long result$228 = -1L;\n                        boolean isNull$228;\n                        if (isNull$225) {\n\n                            isNull$228 = agg4_countIsNull;\n                            if (!isNull$228) {\n                                result$228 = agg4_count;\n                            }\n                        } else {\n\n\n                            isNull$226 = agg4_countIsNull || false;\n                            result$227 = -1L;\n                            if (!isNull$226) {\n\n                                result$227 = (long) (agg4_count + ((long) 1L));\n\n                            }\n\n                            isNull$228 = isNull$226;\n                            if (!isNull$228) {\n                                result$228 = result$227;\n                            }\n                        }\n                        agg4_count = result$228;\n                        ;\n                        agg4_countIsNull = isNull$228;\n\n                    }\n                }\n\n                thisValue = ((long) thisValue) | ((long) otherValue);\n                is_distinct_value_empty_0 = false;\n\n                if (is_distinct_value_empty_0) {\n                    distinct_view_0.remove(distinctKey$223);\n                } else if (is_distinct_value_changed_0) { // value is not empty and is changed, do update\n                    distinct_view_0.put(distinctKey$223, thisValue);\n                }\n            } // end foreach\n        } // end otherEntries != null\n\n\n    }\n\n    @Override\n    public void setAccumulators(Long ns, org.apache.flink.table.data.RowData acc)\n            throws Exception {\n        namespace = (Long) ns;\n\n        long field$170;\n        boolean isNull$170;\n        long field$171;\n        boolean isNull$171;\n        long field$172;\n        boolean isNull$172;\n        long field$173;\n        boolean isNull$173;\n        long field$174;\n        boolean isNull$174;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$175;\n        boolean isNull$175;\n        isNull$174 = acc.isNullAt(4);\n        field$174 = -1L;\n        if (!isNull$174) {\n            field$174 = acc.getLong(4);\n        }\n        isNull$170 = acc.isNullAt(0);\n        field$170 = -1L;\n        if (!isNull$170) {\n            field$170 = acc.getLong(0);\n        }\n        isNull$171 = acc.isNullAt(1);\n        field$171 = -1L;\n        if (!isNull$171) {\n            field$171 = acc.getLong(1);\n        }\n        isNull$173 = acc.isNullAt(3);\n        field$173 = -1L;\n        if (!isNull$173) {\n            field$173 = acc.getLong(3);\n        }\n\n        // when namespace is null, the dataview is used in heap, no key and namespace set\n        if (namespace != null) {\n            distinctAcc_0_dataview.setCurrentNamespace(namespace);\n            distinct_view_0 = distinctAcc_0_dataview;\n        } else {\n            isNull$175 = acc.isNullAt(5);\n            field$175 = null;\n            if (!isNull$175) {\n                field$175 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n            }\n            distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$175.getJavaObject();\n        }\n\n        isNull$172 = acc.isNullAt(2);\n        field$172 = -1L;\n        if (!isNull$172) {\n   
         field$172 = acc.getLong(2);\n        }\n\n        agg0_count1 = field$170;\n        ;\n        agg0_count1IsNull = isNull$170;\n\n\n        agg1_sum = field$171;\n        ;\n        agg1_sumIsNull = isNull$171;\n\n\n        agg2_max = field$172;\n        ;\n        agg2_maxIsNull = isNull$172;\n\n\n        agg3_min = field$173;\n        ;\n        agg3_minIsNull = isNull$173;\n\n\n        agg4_count = field$174;\n        ;\n        agg4_countIsNull = isNull$174;\n\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n\n\n        acc$169 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (agg0_count1IsNull) {\n            acc$169.setField(0, null);\n        } else {\n            acc$169.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            acc$169.setField(1, null);\n        } else {\n            acc$169.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            acc$169.setField(2, null);\n        } else {\n            acc$169.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            acc$169.setField(3, null);\n        } else {\n            acc$169.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            acc$169.setField(4, null);\n        } else {\n            acc$169.setField(4, agg4_count);\n        }\n\n\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$168 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n\n        if (false) {\n            acc$169.setField(5, null);\n        } else {\n            acc$169.setField(5, distinct_acc$168);\n        }\n\n\n        return acc$169;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n\n\n        acc$167 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (false) {\n            acc$167.setField(0, null);\n        } else {\n            acc$167.setField(0, ((long) 0L));\n        }\n\n\n        if (true) {\n            acc$167.setField(1, null);\n        } else {\n            acc$167.setField(1, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$167.setField(2, null);\n        } else {\n            acc$167.setField(2, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$167.setField(3, null);\n        } else {\n            acc$167.setField(3, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$167.setField(4, null);\n        } else {\n            acc$167.setField(4, ((long) 0L));\n        }\n\n\n        org.apache.flink.table.api.dataview.MapView mapview$166 = new org.apache.flink.table.api.dataview.MapView();\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$166 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$166);\n\n        if (false) {\n            acc$167.setField(5, null);\n        } else {\n            acc$167.setField(5, distinct_acc$166);\n        }\n\n\n        return acc$167;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue(Long ns) throws Exception {\n        namespace = (Long) ns;\n\n\n        aggValue$231 = new org.apache.flink.table.data.GenericRowData(7);\n\n\n        if (agg0_count1IsNull) {\n            aggValue$231.setField(0, null);\n        } else {\n            aggValue$231.setField(0, agg0_count1);\n        }\n\n\n        if 
(agg1_sumIsNull) {\n            aggValue$231.setField(1, null);\n        } else {\n            aggValue$231.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            aggValue$231.setField(2, null);\n        } else {\n            aggValue$231.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            aggValue$231.setField(3, null);\n        } else {\n            aggValue$231.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            aggValue$231.setField(4, null);\n        } else {\n            aggValue$231.setField(4, agg4_count);\n        }\n\n\n        if (false) {\n            aggValue$231.setField(5, null);\n        } else {\n            aggValue$231.setField(5, org.apache.flink.table.data.TimestampData\n                    .fromEpochMillis(sliceAssigner$163.getWindowStart(namespace)));\n        }\n\n\n        if (false) {\n            aggValue$231.setField(6, null);\n        } else {\n            aggValue$231.setField(6, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace));\n        }\n\n\n        return aggValue$231;\n\n    }\n\n    @Override\n    public void cleanup(Long ns) throws Exception {\n        namespace = (Long) ns;\n\n        distinctAcc_0_dataview.setCurrentNamespace(namespace);\n        distinctAcc_0_dataview.clear();\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/global_agg/LocalWindowAggsHandler$162.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window.global_agg;\n\n\npublic final class LocalWindowAggsHandler$162\n        implements org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<Long> {\n\n    private transient org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedUnsharedSliceAssigner\n            sliceAssigner$95;\n    long agg0_count1;\n    boolean agg0_count1IsNull;\n    long agg1_sum;\n    boolean agg1_sumIsNull;\n    long agg2_max;\n    boolean agg2_maxIsNull;\n    long agg3_min;\n    boolean agg3_minIsNull;\n    long agg4_count;\n    boolean agg4_countIsNull;\n    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n    org.apache.flink.table.data.GenericRowData acc$97 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData acc$99 = new org.apache.flink.table.data.GenericRowData(6);\n    private org.apache.flink.table.api.dataview.MapView otherMapView$151;\n    private transient org.apache.flink.table.data.conversion.RawObjectConverter converter$152;\n    org.apache.flink.table.data.GenericRowData aggValue$161 = new org.apache.flink.table.data.GenericRowData(7);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    private Long namespace;\n\n    public LocalWindowAggsHandler$162(Object[] references) throws Exception {\n        sliceAssigner$95 =\n                (((org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedUnsharedSliceAssigner) references[0]));\n        converter$152 = (((org.apache.flink.table.data.conversion.RawObjectConverter) references[1]));\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n        converter$152.open(getRuntimeContext().getUserCodeClassLoader());\n\n    }\n\n    @Override\n    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n        boolean isNull$106;\n        long result$107;\n        long field$108;\n        boolean isNull$108;\n        boolean isNull$109;\n        long result$110;\n        boolean isNull$113;\n        boolean result$114;\n        boolean isNull$118;\n        boolean result$119;\n        long field$123;\n        boolean isNull$123;\n        boolean isNull$125;\n        long result$126;\n        isNull$108 = accInput.isNullAt(2);\n        field$108 = -1L;\n        if (!isNull$108) {\n            field$108 = accInput.getLong(2);\n        }\n        isNull$123 = accInput.isNullAt(3);\n        field$123 = -1L;\n        if (!isNull$123) {\n            field$123 = accInput.getLong(3);\n        }\n\n\n        isNull$106 = agg0_count1IsNull || false;\n        result$107 = -1L;\n        if (!isNull$106) {\n\n            result$107 = (long) (agg0_count1 + ((long) 1L));\n\n        }\n\n        agg0_count1 = result$107;\n        ;\n        agg0_count1IsNull = isNull$106;\n\n\n        long result$112 = -1L;\n        boolean isNull$112;\n        if (isNull$108) {\n\n            isNull$112 = agg1_sumIsNull;\n            if (!isNull$112) {\n                result$112 = agg1_sum;\n            }\n        } else {\n            long result$111 = -1L;\n            boolean isNull$111;\n            if (agg1_sumIsNull) {\n\n                isNull$111 = 
isNull$108;\n                if (!isNull$111) {\n                    result$111 = field$108;\n                }\n            } else {\n\n\n                isNull$109 = agg1_sumIsNull || isNull$108;\n                result$110 = -1L;\n                if (!isNull$109) {\n\n                    result$110 = (long) (agg1_sum + field$108);\n\n                }\n\n                isNull$111 = isNull$109;\n                if (!isNull$111) {\n                    result$111 = result$110;\n                }\n            }\n            isNull$112 = isNull$111;\n            if (!isNull$112) {\n                result$112 = result$111;\n            }\n        }\n        agg1_sum = result$112;\n        ;\n        agg1_sumIsNull = isNull$112;\n\n\n        long result$117 = -1L;\n        boolean isNull$117;\n        if (isNull$108) {\n\n            isNull$117 = agg2_maxIsNull;\n            if (!isNull$117) {\n                result$117 = agg2_max;\n            }\n        } else {\n            long result$116 = -1L;\n            boolean isNull$116;\n            if (agg2_maxIsNull) {\n\n                isNull$116 = isNull$108;\n                if (!isNull$116) {\n                    result$116 = field$108;\n                }\n            } else {\n                isNull$113 = isNull$108 || agg2_maxIsNull;\n                result$114 = false;\n                if (!isNull$113) {\n\n                    result$114 = field$108 > agg2_max;\n\n                }\n\n                long result$115 = -1L;\n                boolean isNull$115;\n                if (result$114) {\n\n                    isNull$115 = isNull$108;\n                    if (!isNull$115) {\n                        result$115 = field$108;\n                    }\n                } else {\n\n                    isNull$115 = agg2_maxIsNull;\n                    if (!isNull$115) {\n                        result$115 = agg2_max;\n                    }\n                }\n                isNull$116 = isNull$115;\n                if (!isNull$116) {\n                    result$116 = result$115;\n                }\n            }\n            isNull$117 = isNull$116;\n            if (!isNull$117) {\n                result$117 = result$116;\n            }\n        }\n        agg2_max = result$117;\n        ;\n        agg2_maxIsNull = isNull$117;\n\n\n        long result$122 = -1L;\n        boolean isNull$122;\n        if (isNull$108) {\n\n            isNull$122 = agg3_minIsNull;\n            if (!isNull$122) {\n                result$122 = agg3_min;\n            }\n        } else {\n            long result$121 = -1L;\n            boolean isNull$121;\n            if (agg3_minIsNull) {\n\n                isNull$121 = isNull$108;\n                if (!isNull$121) {\n                    result$121 = field$108;\n                }\n            } else {\n                isNull$118 = isNull$108 || agg3_minIsNull;\n                result$119 = false;\n                if (!isNull$118) {\n\n                    result$119 = field$108 < agg3_min;\n\n                }\n\n                long result$120 = -1L;\n                boolean isNull$120;\n                if (result$119) {\n\n                    isNull$120 = isNull$108;\n                    if (!isNull$120) {\n                        result$120 = field$108;\n                    }\n                } else {\n\n                    isNull$120 = agg3_minIsNull;\n                    if (!isNull$120) {\n                        result$120 = agg3_min;\n                    }\n                }\n                isNull$121 = 
isNull$120;\n                if (!isNull$121) {\n                    result$121 = result$120;\n                }\n            }\n            isNull$122 = isNull$121;\n            if (!isNull$122) {\n                result$122 = result$121;\n            }\n        }\n        agg3_min = result$122;\n        ;\n        agg3_minIsNull = isNull$122;\n\n\n        Long distinctKey$124 = (Long) field$123;\n        if (isNull$123) {\n            distinctKey$124 = null;\n        }\n\n        Long value$128 = (Long) distinct_view_0.get(distinctKey$124);\n        if (value$128 == null) {\n            value$128 = 0L;\n        }\n\n        boolean is_distinct_value_changed_0 = false;\n\n        long existed$129 = ((long) value$128) & (1L << 0);\n        if (existed$129 == 0) {  // not existed\n            value$128 = ((long) value$128) | (1L << 0);\n            is_distinct_value_changed_0 = true;\n\n            long result$127 = -1L;\n            boolean isNull$127;\n            if (isNull$123) {\n\n                isNull$127 = agg4_countIsNull;\n                if (!isNull$127) {\n                    result$127 = agg4_count;\n                }\n            } else {\n\n\n                isNull$125 = agg4_countIsNull || false;\n                result$126 = -1L;\n                if (!isNull$125) {\n\n                    result$126 = (long) (agg4_count + ((long) 1L));\n\n                }\n\n                isNull$127 = isNull$125;\n                if (!isNull$127) {\n                    result$127 = result$126;\n                }\n            }\n            agg4_count = result$127;\n            ;\n            agg4_countIsNull = isNull$127;\n\n        }\n\n        if (is_distinct_value_changed_0) {\n            distinct_view_0.put(distinctKey$124, value$128);\n        }\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        throw new RuntimeException(\n                \"This function not require retract method, but the retract method is called.\");\n\n    }\n\n    @Override\n    public void merge(Long ns, org.apache.flink.table.data.RowData otherAcc) throws Exception {\n        namespace = (Long) ns;\n\n        long field$130;\n        boolean isNull$130;\n        boolean isNull$131;\n        long result$132;\n        long field$133;\n        boolean isNull$133;\n        boolean isNull$134;\n        long result$135;\n        long field$138;\n        boolean isNull$138;\n        boolean isNull$139;\n        boolean result$140;\n        long field$144;\n        boolean isNull$144;\n        boolean isNull$145;\n        boolean result$146;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$150;\n        boolean isNull$150;\n        boolean isNull$156;\n        long result$157;\n        isNull$130 = otherAcc.isNullAt(2);\n        field$130 = -1L;\n        if (!isNull$130) {\n            field$130 = otherAcc.getLong(2);\n        }\n        isNull$133 = otherAcc.isNullAt(3);\n        field$133 = -1L;\n        if (!isNull$133) {\n            field$133 = otherAcc.getLong(3);\n        }\n\n        isNull$150 = otherAcc.isNullAt(7);\n        field$150 = null;\n        if (!isNull$150) {\n            field$150 = ((org.apache.flink.table.data.binary.BinaryRawValueData) otherAcc.getRawValue(7));\n        }\n        otherMapView$151 = null;\n        if (!isNull$150) {\n            otherMapView$151 =\n                    (org.apache.flink.table.api.dataview.MapView) converter$152\n                            
.toExternal((org.apache.flink.table.data.binary.BinaryRawValueData) field$150);\n        }\n\n        isNull$144 = otherAcc.isNullAt(5);\n        field$144 = -1L;\n        if (!isNull$144) {\n            field$144 = otherAcc.getLong(5);\n        }\n        isNull$138 = otherAcc.isNullAt(4);\n        field$138 = -1L;\n        if (!isNull$138) {\n            field$138 = otherAcc.getLong(4);\n        }\n\n\n        isNull$131 = agg0_count1IsNull || isNull$130;\n        result$132 = -1L;\n        if (!isNull$131) {\n\n            result$132 = (long) (agg0_count1 + field$130);\n\n        }\n\n        agg0_count1 = result$132;\n        ;\n        agg0_count1IsNull = isNull$131;\n\n\n        long result$137 = -1L;\n        boolean isNull$137;\n        if (isNull$133) {\n\n            isNull$137 = agg1_sumIsNull;\n            if (!isNull$137) {\n                result$137 = agg1_sum;\n            }\n        } else {\n            long result$136 = -1L;\n            boolean isNull$136;\n            if (agg1_sumIsNull) {\n\n                isNull$136 = isNull$133;\n                if (!isNull$136) {\n                    result$136 = field$133;\n                }\n            } else {\n\n\n                isNull$134 = agg1_sumIsNull || isNull$133;\n                result$135 = -1L;\n                if (!isNull$134) {\n\n                    result$135 = (long) (agg1_sum + field$133);\n\n                }\n\n                isNull$136 = isNull$134;\n                if (!isNull$136) {\n                    result$136 = result$135;\n                }\n            }\n            isNull$137 = isNull$136;\n            if (!isNull$137) {\n                result$137 = result$136;\n            }\n        }\n        agg1_sum = result$137;\n        ;\n        agg1_sumIsNull = isNull$137;\n\n\n        long result$143 = -1L;\n        boolean isNull$143;\n        if (isNull$138) {\n\n            isNull$143 = agg2_maxIsNull;\n            if (!isNull$143) {\n                result$143 = agg2_max;\n            }\n        } else {\n            long result$142 = -1L;\n            boolean isNull$142;\n            if (agg2_maxIsNull) {\n\n                isNull$142 = isNull$138;\n                if (!isNull$142) {\n                    result$142 = field$138;\n                }\n            } else {\n                isNull$139 = isNull$138 || agg2_maxIsNull;\n                result$140 = false;\n                if (!isNull$139) {\n\n                    result$140 = field$138 > agg2_max;\n\n                }\n\n                long result$141 = -1L;\n                boolean isNull$141;\n                if (result$140) {\n\n                    isNull$141 = isNull$138;\n                    if (!isNull$141) {\n                        result$141 = field$138;\n                    }\n                } else {\n\n                    isNull$141 = agg2_maxIsNull;\n                    if (!isNull$141) {\n                        result$141 = agg2_max;\n                    }\n                }\n                isNull$142 = isNull$141;\n                if (!isNull$142) {\n                    result$142 = result$141;\n                }\n            }\n            isNull$143 = isNull$142;\n            if (!isNull$143) {\n                result$143 = result$142;\n            }\n        }\n        agg2_max = result$143;\n        ;\n        agg2_maxIsNull = isNull$143;\n\n\n        long result$149 = -1L;\n        boolean isNull$149;\n        if (isNull$144) {\n\n            isNull$149 = agg3_minIsNull;\n            if (!isNull$149) {\n          
      result$149 = agg3_min;\n            }\n        } else {\n            long result$148 = -1L;\n            boolean isNull$148;\n            if (agg3_minIsNull) {\n\n                isNull$148 = isNull$144;\n                if (!isNull$148) {\n                    result$148 = field$144;\n                }\n            } else {\n                isNull$145 = isNull$144 || agg3_minIsNull;\n                result$146 = false;\n                if (!isNull$145) {\n\n                    result$146 = field$144 < agg3_min;\n\n                }\n\n                long result$147 = -1L;\n                boolean isNull$147;\n                if (result$146) {\n\n                    isNull$147 = isNull$144;\n                    if (!isNull$147) {\n                        result$147 = field$144;\n                    }\n                } else {\n\n                    isNull$147 = agg3_minIsNull;\n                    if (!isNull$147) {\n                        result$147 = agg3_min;\n                    }\n                }\n                isNull$148 = isNull$147;\n                if (!isNull$148) {\n                    result$148 = result$147;\n                }\n            }\n            isNull$149 = isNull$148;\n            if (!isNull$149) {\n                result$149 = result$148;\n            }\n        }\n        agg3_min = result$149;\n        ;\n        agg3_minIsNull = isNull$149;\n\n\n        Iterable<java.util.Map.Entry> otherEntries$159 =\n                (Iterable<java.util.Map.Entry>) otherMapView$151.entries();\n        if (otherEntries$159 != null) {\n            for (java.util.Map.Entry entry : otherEntries$159) {\n                Long distinctKey$153 = (Long) entry.getKey();\n                long field$154 = -1L;\n                boolean isNull$155 = true;\n                if (distinctKey$153 != null) {\n                    isNull$155 = false;\n                    field$154 = (long) distinctKey$153;\n                }\n                Long otherValue = (Long) entry.getValue();\n                Long thisValue = (Long) distinct_view_0.get(distinctKey$153);\n                if (thisValue == null) {\n                    thisValue = 0L;\n                }\n                boolean is_distinct_value_changed_0 = false;\n                boolean is_distinct_value_empty_0 = false;\n\n\n                long existed$160 = ((long) thisValue) & (1L << 0);\n                if (existed$160 == 0) {  // not existed\n                    long otherExisted = ((long) otherValue) & (1L << 0);\n                    if (otherExisted != 0) {  // existed in other\n                        is_distinct_value_changed_0 = true;\n                        // do accumulate\n\n                        long result$158 = -1L;\n                        boolean isNull$158;\n                        if (isNull$155) {\n\n                            isNull$158 = agg4_countIsNull;\n                            if (!isNull$158) {\n                                result$158 = agg4_count;\n                            }\n                        } else {\n\n\n                            isNull$156 = agg4_countIsNull || false;\n                            result$157 = -1L;\n                            if (!isNull$156) {\n\n                                result$157 = (long) (agg4_count + ((long) 1L));\n\n                            }\n\n                            isNull$158 = isNull$156;\n                            if (!isNull$158) {\n                                result$158 = result$157;\n                            }\n              
          }\n                        agg4_count = result$158;\n                        ;\n                        agg4_countIsNull = isNull$158;\n\n                    }\n                }\n\n                thisValue = ((long) thisValue) | ((long) otherValue);\n                is_distinct_value_empty_0 = false;\n\n                if (is_distinct_value_empty_0) {\n                    distinct_view_0.remove(distinctKey$153);\n                } else if (is_distinct_value_changed_0) { // value is not empty and is changed, do update\n                    distinct_view_0.put(distinctKey$153, thisValue);\n                }\n            } // end foreach\n        } // end otherEntries != null\n\n\n    }\n\n    @Override\n    public void setAccumulators(Long ns, org.apache.flink.table.data.RowData acc)\n            throws Exception {\n        namespace = (Long) ns;\n\n        long field$100;\n        boolean isNull$100;\n        long field$101;\n        boolean isNull$101;\n        long field$102;\n        boolean isNull$102;\n        long field$103;\n        boolean isNull$103;\n        long field$104;\n        boolean isNull$104;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$105;\n        boolean isNull$105;\n        isNull$104 = acc.isNullAt(4);\n        field$104 = -1L;\n        if (!isNull$104) {\n            field$104 = acc.getLong(4);\n        }\n        isNull$100 = acc.isNullAt(0);\n        field$100 = -1L;\n        if (!isNull$100) {\n            field$100 = acc.getLong(0);\n        }\n        isNull$101 = acc.isNullAt(1);\n        field$101 = -1L;\n        if (!isNull$101) {\n            field$101 = acc.getLong(1);\n        }\n        isNull$103 = acc.isNullAt(3);\n        field$103 = -1L;\n        if (!isNull$103) {\n            field$103 = acc.getLong(3);\n        }\n\n        isNull$105 = acc.isNullAt(5);\n        field$105 = null;\n        if (!isNull$105) {\n            field$105 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n        }\n        distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$105.getJavaObject();\n\n        isNull$102 = acc.isNullAt(2);\n        field$102 = -1L;\n        if (!isNull$102) {\n            field$102 = acc.getLong(2);\n        }\n\n        agg0_count1 = field$100;\n        ;\n        agg0_count1IsNull = isNull$100;\n\n\n        agg1_sum = field$101;\n        ;\n        agg1_sumIsNull = isNull$101;\n\n\n        agg2_max = field$102;\n        ;\n        agg2_maxIsNull = isNull$102;\n\n\n        agg3_min = field$103;\n        ;\n        agg3_minIsNull = isNull$103;\n\n\n        agg4_count = field$104;\n        ;\n        agg4_countIsNull = isNull$104;\n\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n\n\n        acc$99 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (agg0_count1IsNull) {\n            acc$99.setField(0, null);\n        } else {\n            acc$99.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            acc$99.setField(1, null);\n        } else {\n            acc$99.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            acc$99.setField(2, null);\n        } else {\n            acc$99.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            acc$99.setField(3, null);\n        } else {\n            acc$99.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            acc$99.setField(4, 
null);\n        } else {\n            acc$99.setField(4, agg4_count);\n        }\n\n\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$98 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n\n        if (false) {\n            acc$99.setField(5, null);\n        } else {\n            acc$99.setField(5, distinct_acc$98);\n        }\n\n\n        return acc$99;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n\n\n        acc$97 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (false) {\n            acc$97.setField(0, null);\n        } else {\n            acc$97.setField(0, ((long) 0L));\n        }\n\n\n        if (true) {\n            acc$97.setField(1, null);\n        } else {\n            acc$97.setField(1, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$97.setField(2, null);\n        } else {\n            acc$97.setField(2, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$97.setField(3, null);\n        } else {\n            acc$97.setField(3, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$97.setField(4, null);\n        } else {\n            acc$97.setField(4, ((long) 0L));\n        }\n\n\n        org.apache.flink.table.api.dataview.MapView mapview$96 = new org.apache.flink.table.api.dataview.MapView();\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$96 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$96);\n\n        if (false) {\n            acc$97.setField(5, null);\n        } else {\n            acc$97.setField(5, distinct_acc$96);\n        }\n\n\n        return acc$97;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue(Long ns) throws Exception {\n        namespace = (Long) ns;\n\n\n        aggValue$161 = new org.apache.flink.table.data.GenericRowData(7);\n\n\n        if (agg0_count1IsNull) {\n            aggValue$161.setField(0, null);\n        } else {\n            aggValue$161.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            aggValue$161.setField(1, null);\n        } else {\n            aggValue$161.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            aggValue$161.setField(2, null);\n        } else {\n            aggValue$161.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            aggValue$161.setField(3, null);\n        } else {\n            aggValue$161.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            aggValue$161.setField(4, null);\n        } else {\n            aggValue$161.setField(4, agg4_count);\n        }\n\n\n        if (false) {\n            aggValue$161.setField(5, null);\n        } else {\n            aggValue$161.setField(5, org.apache.flink.table.data.TimestampData\n                    .fromEpochMillis(sliceAssigner$95.getWindowStart(namespace)));\n        }\n\n\n        if (false) {\n            aggValue$161.setField(6, null);\n        } else {\n            aggValue$161.setField(6, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace));\n        }\n\n\n        return aggValue$161;\n\n    }\n\n    @Override\n    public void cleanup(Long ns) throws Exception {\n        namespace = (Long) ns;\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/global_agg/StateWindowAggsHandler$300.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window.global_agg;\n\n\npublic final class StateWindowAggsHandler$300\n        implements org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<Long> {\n\n    private transient org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedUnsharedSliceAssigner\n            sliceAssigner$233;\n    long agg0_count1;\n    boolean agg0_count1IsNull;\n    long agg1_sum;\n    boolean agg1_sumIsNull;\n    long agg2_max;\n    boolean agg2_maxIsNull;\n    long agg3_min;\n    boolean agg3_minIsNull;\n    long agg4_count;\n    boolean agg4_countIsNull;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$234;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$235;\n    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview;\n    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_raw_value;\n    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview_backup;\n    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_backup_raw_value;\n    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n    private org.apache.flink.table.api.dataview.MapView distinct_backup_view_0;\n    org.apache.flink.table.data.GenericRowData acc$237 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData acc$239 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData aggValue$299 = new org.apache.flink.table.data.GenericRowData(7);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    private Long namespace;\n\n    public StateWindowAggsHandler$300(Object[] references) throws Exception {\n        sliceAssigner$233 =\n                (((org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedUnsharedSliceAssigner) references[0]));\n        externalSerializer$234 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[1]));\n        externalSerializer$235 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[2]));\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n        distinctAcc_0_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$234, externalSerializer$235);\n        distinctAcc_0_dataview_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview);\n\n\n        distinctAcc_0_dataview_backup = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$234, externalSerializer$235);\n        distinctAcc_0_dataview_backup_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview_backup);\n\n        distinct_view_0 = distinctAcc_0_dataview;\n        distinct_backup_view_0 = distinctAcc_0_dataview_backup;\n    }\n\n    @Override\n    public void 
accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n        boolean isNull$246;\n        long result$247;\n        long field$248;\n        boolean isNull$248;\n        boolean isNull$249;\n        long result$250;\n        boolean isNull$253;\n        boolean result$254;\n        boolean isNull$258;\n        boolean result$259;\n        long field$263;\n        boolean isNull$263;\n        boolean isNull$265;\n        long result$266;\n        isNull$248 = accInput.isNullAt(2);\n        field$248 = -1L;\n        if (!isNull$248) {\n            field$248 = accInput.getLong(2);\n        }\n        isNull$263 = accInput.isNullAt(3);\n        field$263 = -1L;\n        if (!isNull$263) {\n            field$263 = accInput.getLong(3);\n        }\n\n\n        isNull$246 = agg0_count1IsNull || false;\n        result$247 = -1L;\n        if (!isNull$246) {\n\n            result$247 = (long) (agg0_count1 + ((long) 1L));\n\n        }\n\n        agg0_count1 = result$247;\n        ;\n        agg0_count1IsNull = isNull$246;\n\n\n        long result$252 = -1L;\n        boolean isNull$252;\n        if (isNull$248) {\n\n            isNull$252 = agg1_sumIsNull;\n            if (!isNull$252) {\n                result$252 = agg1_sum;\n            }\n        } else {\n            long result$251 = -1L;\n            boolean isNull$251;\n            if (agg1_sumIsNull) {\n\n                isNull$251 = isNull$248;\n                if (!isNull$251) {\n                    result$251 = field$248;\n                }\n            } else {\n\n\n                isNull$249 = agg1_sumIsNull || isNull$248;\n                result$250 = -1L;\n                if (!isNull$249) {\n\n                    result$250 = (long) (agg1_sum + field$248);\n\n                }\n\n                isNull$251 = isNull$249;\n                if (!isNull$251) {\n                    result$251 = result$250;\n                }\n            }\n            isNull$252 = isNull$251;\n            if (!isNull$252) {\n                result$252 = result$251;\n            }\n        }\n        agg1_sum = result$252;\n        ;\n        agg1_sumIsNull = isNull$252;\n\n\n        long result$257 = -1L;\n        boolean isNull$257;\n        if (isNull$248) {\n\n            isNull$257 = agg2_maxIsNull;\n            if (!isNull$257) {\n                result$257 = agg2_max;\n            }\n        } else {\n            long result$256 = -1L;\n            boolean isNull$256;\n            if (agg2_maxIsNull) {\n\n                isNull$256 = isNull$248;\n                if (!isNull$256) {\n                    result$256 = field$248;\n                }\n            } else {\n                isNull$253 = isNull$248 || agg2_maxIsNull;\n                result$254 = false;\n                if (!isNull$253) {\n\n                    result$254 = field$248 > agg2_max;\n\n                }\n\n                long result$255 = -1L;\n                boolean isNull$255;\n                if (result$254) {\n\n                    isNull$255 = isNull$248;\n                    if (!isNull$255) {\n                        result$255 = field$248;\n                    }\n                } else {\n\n                    isNull$255 = agg2_maxIsNull;\n                    if (!isNull$255) {\n                        result$255 = agg2_max;\n                    }\n                }\n                isNull$256 = isNull$255;\n                if (!isNull$256) {\n                    result$256 = result$255;\n                }\n            }\n            isNull$257 = 
isNull$256;\n            if (!isNull$257) {\n                result$257 = result$256;\n            }\n        }\n        agg2_max = result$257;\n        ;\n        agg2_maxIsNull = isNull$257;\n\n\n        long result$262 = -1L;\n        boolean isNull$262;\n        if (isNull$248) {\n\n            isNull$262 = agg3_minIsNull;\n            if (!isNull$262) {\n                result$262 = agg3_min;\n            }\n        } else {\n            long result$261 = -1L;\n            boolean isNull$261;\n            if (agg3_minIsNull) {\n\n                isNull$261 = isNull$248;\n                if (!isNull$261) {\n                    result$261 = field$248;\n                }\n            } else {\n                isNull$258 = isNull$248 || agg3_minIsNull;\n                result$259 = false;\n                if (!isNull$258) {\n\n                    result$259 = field$248 < agg3_min;\n\n                }\n\n                long result$260 = -1L;\n                boolean isNull$260;\n                if (result$259) {\n\n                    isNull$260 = isNull$248;\n                    if (!isNull$260) {\n                        result$260 = field$248;\n                    }\n                } else {\n\n                    isNull$260 = agg3_minIsNull;\n                    if (!isNull$260) {\n                        result$260 = agg3_min;\n                    }\n                }\n                isNull$261 = isNull$260;\n                if (!isNull$261) {\n                    result$261 = result$260;\n                }\n            }\n            isNull$262 = isNull$261;\n            if (!isNull$262) {\n                result$262 = result$261;\n            }\n        }\n        agg3_min = result$262;\n        ;\n        agg3_minIsNull = isNull$262;\n\n\n        Long distinctKey$264 = (Long) field$263;\n        if (isNull$263) {\n            distinctKey$264 = null;\n        }\n\n        Long value$268 = (Long) distinct_view_0.get(distinctKey$264);\n        if (value$268 == null) {\n            value$268 = 0L;\n        }\n\n        boolean is_distinct_value_changed_0 = false;\n\n        long existed$269 = ((long) value$268) & (1L << 0);\n        if (existed$269 == 0) {  // not existed\n            value$268 = ((long) value$268) | (1L << 0);\n            is_distinct_value_changed_0 = true;\n\n            long result$267 = -1L;\n            boolean isNull$267;\n            if (isNull$263) {\n\n                isNull$267 = agg4_countIsNull;\n                if (!isNull$267) {\n                    result$267 = agg4_count;\n                }\n            } else {\n\n\n                isNull$265 = agg4_countIsNull || false;\n                result$266 = -1L;\n                if (!isNull$265) {\n\n                    result$266 = (long) (agg4_count + ((long) 1L));\n\n                }\n\n                isNull$267 = isNull$265;\n                if (!isNull$267) {\n                    result$267 = result$266;\n                }\n            }\n            agg4_count = result$267;\n            ;\n            agg4_countIsNull = isNull$267;\n\n        }\n\n        if (is_distinct_value_changed_0) {\n            distinct_view_0.put(distinctKey$264, value$268);\n        }\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        throw new RuntimeException(\n                \"This function not require retract method, but the retract method is called.\");\n\n    }\n\n    @Override\n    public void merge(Long ns, 
org.apache.flink.table.data.RowData otherAcc) throws Exception {\n        namespace = (Long) ns;\n\n        long field$270;\n        boolean isNull$270;\n        boolean isNull$271;\n        long result$272;\n        long field$273;\n        boolean isNull$273;\n        boolean isNull$274;\n        long result$275;\n        long field$278;\n        boolean isNull$278;\n        boolean isNull$279;\n        boolean result$280;\n        long field$284;\n        boolean isNull$284;\n        boolean isNull$285;\n        boolean result$286;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$290;\n        boolean isNull$290;\n        boolean isNull$294;\n        long result$295;\n        isNull$278 = otherAcc.isNullAt(2);\n        field$278 = -1L;\n        if (!isNull$278) {\n            field$278 = otherAcc.getLong(2);\n        }\n        isNull$273 = otherAcc.isNullAt(1);\n        field$273 = -1L;\n        if (!isNull$273) {\n            field$273 = otherAcc.getLong(1);\n        }\n        isNull$270 = otherAcc.isNullAt(0);\n        field$270 = -1L;\n        if (!isNull$270) {\n            field$270 = otherAcc.getLong(0);\n        }\n        isNull$284 = otherAcc.isNullAt(3);\n        field$284 = -1L;\n        if (!isNull$284) {\n            field$284 = otherAcc.getLong(3);\n        }\n\n        // when namespace is null, the dataview is used in heap, no key and namespace set\n        if (namespace != null) {\n            distinctAcc_0_dataview_backup.setCurrentNamespace(namespace);\n            distinct_backup_view_0 = distinctAcc_0_dataview_backup;\n        } else {\n            isNull$290 = otherAcc.isNullAt(5);\n            field$290 = null;\n            if (!isNull$290) {\n                field$290 = ((org.apache.flink.table.data.binary.BinaryRawValueData) otherAcc.getRawValue(5));\n            }\n            distinct_backup_view_0 = (org.apache.flink.table.api.dataview.MapView) field$290.getJavaObject();\n        }\n\n\n        isNull$271 = agg0_count1IsNull || isNull$270;\n        result$272 = -1L;\n        if (!isNull$271) {\n\n            result$272 = (long) (agg0_count1 + field$270);\n\n        }\n\n        agg0_count1 = result$272;\n        ;\n        agg0_count1IsNull = isNull$271;\n\n\n        long result$277 = -1L;\n        boolean isNull$277;\n        if (isNull$273) {\n\n            isNull$277 = agg1_sumIsNull;\n            if (!isNull$277) {\n                result$277 = agg1_sum;\n            }\n        } else {\n            long result$276 = -1L;\n            boolean isNull$276;\n            if (agg1_sumIsNull) {\n\n                isNull$276 = isNull$273;\n                if (!isNull$276) {\n                    result$276 = field$273;\n                }\n            } else {\n\n\n                isNull$274 = agg1_sumIsNull || isNull$273;\n                result$275 = -1L;\n                if (!isNull$274) {\n\n                    result$275 = (long) (agg1_sum + field$273);\n\n                }\n\n                isNull$276 = isNull$274;\n                if (!isNull$276) {\n                    result$276 = result$275;\n                }\n            }\n            isNull$277 = isNull$276;\n            if (!isNull$277) {\n                result$277 = result$276;\n            }\n        }\n        agg1_sum = result$277;\n        ;\n        agg1_sumIsNull = isNull$277;\n\n\n        long result$283 = -1L;\n        boolean isNull$283;\n        if (isNull$278) {\n\n            isNull$283 = agg2_maxIsNull;\n            if (!isNull$283) {\n                result$283 
= agg2_max;\n            }\n        } else {\n            long result$282 = -1L;\n            boolean isNull$282;\n            if (agg2_maxIsNull) {\n\n                isNull$282 = isNull$278;\n                if (!isNull$282) {\n                    result$282 = field$278;\n                }\n            } else {\n                isNull$279 = isNull$278 || agg2_maxIsNull;\n                result$280 = false;\n                if (!isNull$279) {\n\n                    result$280 = field$278 > agg2_max;\n\n                }\n\n                long result$281 = -1L;\n                boolean isNull$281;\n                if (result$280) {\n\n                    isNull$281 = isNull$278;\n                    if (!isNull$281) {\n                        result$281 = field$278;\n                    }\n                } else {\n\n                    isNull$281 = agg2_maxIsNull;\n                    if (!isNull$281) {\n                        result$281 = agg2_max;\n                    }\n                }\n                isNull$282 = isNull$281;\n                if (!isNull$282) {\n                    result$282 = result$281;\n                }\n            }\n            isNull$283 = isNull$282;\n            if (!isNull$283) {\n                result$283 = result$282;\n            }\n        }\n        agg2_max = result$283;\n        ;\n        agg2_maxIsNull = isNull$283;\n\n\n        long result$289 = -1L;\n        boolean isNull$289;\n        if (isNull$284) {\n\n            isNull$289 = agg3_minIsNull;\n            if (!isNull$289) {\n                result$289 = agg3_min;\n            }\n        } else {\n            long result$288 = -1L;\n            boolean isNull$288;\n            if (agg3_minIsNull) {\n\n                isNull$288 = isNull$284;\n                if (!isNull$288) {\n                    result$288 = field$284;\n                }\n            } else {\n                isNull$285 = isNull$284 || agg3_minIsNull;\n                result$286 = false;\n                if (!isNull$285) {\n\n                    result$286 = field$284 < agg3_min;\n\n                }\n\n                long result$287 = -1L;\n                boolean isNull$287;\n                if (result$286) {\n\n                    isNull$287 = isNull$284;\n                    if (!isNull$287) {\n                        result$287 = field$284;\n                    }\n                } else {\n\n                    isNull$287 = agg3_minIsNull;\n                    if (!isNull$287) {\n                        result$287 = agg3_min;\n                    }\n                }\n                isNull$288 = isNull$287;\n                if (!isNull$288) {\n                    result$288 = result$287;\n                }\n            }\n            isNull$289 = isNull$288;\n            if (!isNull$289) {\n                result$289 = result$288;\n            }\n        }\n        agg3_min = result$289;\n        ;\n        agg3_minIsNull = isNull$289;\n\n\n        Iterable<java.util.Map.Entry> otherEntries$297 =\n                (Iterable<java.util.Map.Entry>) distinct_backup_view_0.entries();\n        if (otherEntries$297 != null) {\n            for (java.util.Map.Entry entry : otherEntries$297) {\n                Long distinctKey$291 = (Long) entry.getKey();\n                long field$292 = -1L;\n                boolean isNull$293 = true;\n                if (distinctKey$291 != null) {\n                    isNull$293 = false;\n                    field$292 = (long) distinctKey$291;\n                }\n                Long 
otherValue = (Long) entry.getValue();\n                Long thisValue = (Long) distinct_view_0.get(distinctKey$291);\n                if (thisValue == null) {\n                    thisValue = 0L;\n                }\n                boolean is_distinct_value_changed_0 = false;\n                boolean is_distinct_value_empty_0 = false;\n\n\n                long existed$298 = ((long) thisValue) & (1L << 0);\n                if (existed$298 == 0) {  // not existed\n                    long otherExisted = ((long) otherValue) & (1L << 0);\n                    if (otherExisted != 0) {  // existed in other\n                        is_distinct_value_changed_0 = true;\n                        // do accumulate\n\n                        long result$296 = -1L;\n                        boolean isNull$296;\n                        if (isNull$293) {\n\n                            isNull$296 = agg4_countIsNull;\n                            if (!isNull$296) {\n                                result$296 = agg4_count;\n                            }\n                        } else {\n\n\n                            isNull$294 = agg4_countIsNull || false;\n                            result$295 = -1L;\n                            if (!isNull$294) {\n\n                                result$295 = (long) (agg4_count + ((long) 1L));\n\n                            }\n\n                            isNull$296 = isNull$294;\n                            if (!isNull$296) {\n                                result$296 = result$295;\n                            }\n                        }\n                        agg4_count = result$296;\n                        ;\n                        agg4_countIsNull = isNull$296;\n\n                    }\n                }\n\n                thisValue = ((long) thisValue) | ((long) otherValue);\n                is_distinct_value_empty_0 = false;\n\n                if (is_distinct_value_empty_0) {\n                    distinct_view_0.remove(distinctKey$291);\n                } else if (is_distinct_value_changed_0) { // value is not empty and is changed, do update\n                    distinct_view_0.put(distinctKey$291, thisValue);\n                }\n            } // end foreach\n        } // end otherEntries != null\n\n\n    }\n\n    @Override\n    public void setAccumulators(Long ns, org.apache.flink.table.data.RowData acc)\n            throws Exception {\n        namespace = (Long) ns;\n\n        long field$240;\n        boolean isNull$240;\n        long field$241;\n        boolean isNull$241;\n        long field$242;\n        boolean isNull$242;\n        long field$243;\n        boolean isNull$243;\n        long field$244;\n        boolean isNull$244;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$245;\n        boolean isNull$245;\n        isNull$244 = acc.isNullAt(4);\n        field$244 = -1L;\n        if (!isNull$244) {\n            field$244 = acc.getLong(4);\n        }\n        isNull$240 = acc.isNullAt(0);\n        field$240 = -1L;\n        if (!isNull$240) {\n            field$240 = acc.getLong(0);\n        }\n        isNull$241 = acc.isNullAt(1);\n        field$241 = -1L;\n        if (!isNull$241) {\n            field$241 = acc.getLong(1);\n        }\n        isNull$243 = acc.isNullAt(3);\n        field$243 = -1L;\n        if (!isNull$243) {\n            field$243 = acc.getLong(3);\n        }\n\n        // when namespace is null, the dataview is used in heap, no key and namespace set\n        if (namespace != null) {\n            
distinctAcc_0_dataview.setCurrentNamespace(namespace);\n            distinct_view_0 = distinctAcc_0_dataview;\n        } else {\n            isNull$245 = acc.isNullAt(5);\n            field$245 = null;\n            if (!isNull$245) {\n                field$245 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n            }\n            distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$245.getJavaObject();\n        }\n\n        isNull$242 = acc.isNullAt(2);\n        field$242 = -1L;\n        if (!isNull$242) {\n            field$242 = acc.getLong(2);\n        }\n\n        agg0_count1 = field$240;\n        ;\n        agg0_count1IsNull = isNull$240;\n\n\n        agg1_sum = field$241;\n        ;\n        agg1_sumIsNull = isNull$241;\n\n\n        agg2_max = field$242;\n        ;\n        agg2_maxIsNull = isNull$242;\n\n\n        agg3_min = field$243;\n        ;\n        agg3_minIsNull = isNull$243;\n\n\n        agg4_count = field$244;\n        ;\n        agg4_countIsNull = isNull$244;\n\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n\n\n        acc$239 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (agg0_count1IsNull) {\n            acc$239.setField(0, null);\n        } else {\n            acc$239.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            acc$239.setField(1, null);\n        } else {\n            acc$239.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            acc$239.setField(2, null);\n        } else {\n            acc$239.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            acc$239.setField(3, null);\n        } else {\n            acc$239.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            acc$239.setField(4, null);\n        } else {\n            acc$239.setField(4, agg4_count);\n        }\n\n\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$238 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n\n        if (false) {\n            acc$239.setField(5, null);\n        } else {\n            acc$239.setField(5, distinct_acc$238);\n        }\n\n\n        return acc$239;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n\n\n        acc$237 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (false) {\n            acc$237.setField(0, null);\n        } else {\n            acc$237.setField(0, ((long) 0L));\n        }\n\n\n        if (true) {\n            acc$237.setField(1, null);\n        } else {\n            acc$237.setField(1, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$237.setField(2, null);\n        } else {\n            acc$237.setField(2, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$237.setField(3, null);\n        } else {\n            acc$237.setField(3, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$237.setField(4, null);\n        } else {\n            acc$237.setField(4, ((long) 0L));\n        }\n\n\n        org.apache.flink.table.api.dataview.MapView mapview$236 = new org.apache.flink.table.api.dataview.MapView();\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$236 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$236);\n\n        if 
(false) {\n            acc$237.setField(5, null);\n        } else {\n            acc$237.setField(5, distinct_acc$236);\n        }\n\n\n        return acc$237;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue(Long ns) throws Exception {\n        namespace = (Long) ns;\n\n\n        aggValue$299 = new org.apache.flink.table.data.GenericRowData(7);\n\n\n        if (agg0_count1IsNull) {\n            aggValue$299.setField(0, null);\n        } else {\n            aggValue$299.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            aggValue$299.setField(1, null);\n        } else {\n            aggValue$299.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            aggValue$299.setField(2, null);\n        } else {\n            aggValue$299.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            aggValue$299.setField(3, null);\n        } else {\n            aggValue$299.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            aggValue$299.setField(4, null);\n        } else {\n            aggValue$299.setField(4, agg4_count);\n        }\n\n\n        if (false) {\n            aggValue$299.setField(5, null);\n        } else {\n            aggValue$299.setField(5, org.apache.flink.table.data.TimestampData\n                    .fromEpochMillis(sliceAssigner$233.getWindowStart(namespace)));\n        }\n\n\n        if (false) {\n            aggValue$299.setField(6, null);\n        } else {\n            aggValue$299.setField(6, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace));\n        }\n\n\n        return aggValue$299;\n\n    }\n\n    @Override\n    public void cleanup(Long ns) throws Exception {\n        namespace = (Long) ns;\n\n        distinctAcc_0_dataview.setCurrentNamespace(namespace);\n        distinctAcc_0_dataview.clear();\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/local_agg/KeyProjection$89.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window.local_agg;\n\n\npublic class KeyProjection$89 implements\n        org.apache.flink.table.runtime.generated.Projection<org.apache.flink.table.data.RowData,\n                org.apache.flink.table.data.binary.BinaryRowData> {\n\n    org.apache.flink.table.data.binary.BinaryRowData out = new org.apache.flink.table.data.binary.BinaryRowData(2);\n    org.apache.flink.table.data.writer.BinaryRowWriter outWriter =\n            new org.apache.flink.table.data.writer.BinaryRowWriter(out);\n\n    public KeyProjection$89(Object[] references) throws Exception {\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.binary.BinaryRowData apply(org.apache.flink.table.data.RowData in1) {\n        org.apache.flink.table.data.binary.BinaryStringData field$90;\n        boolean isNull$90;\n        int field$91;\n        boolean isNull$91;\n\n\n        outWriter.reset();\n\n        isNull$90 = in1.isNullAt(0);\n        field$90 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n        if (!isNull$90) {\n            field$90 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(0));\n        }\n        if (isNull$90) {\n            outWriter.setNullAt(0);\n        } else {\n            outWriter.writeString(0, field$90);\n        }\n\n\n        isNull$91 = in1.isNullAt(1);\n        field$91 = -1;\n        if (!isNull$91) {\n            field$91 = in1.getInt(1);\n        }\n        if (isNull$91) {\n            outWriter.setNullAt(1);\n        } else {\n            outWriter.writeInt(1, field$91);\n        }\n\n        outWriter.complete();\n\n\n        return out;\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_01_tumble_window/local_agg/LocalWindowAggsHandler$88.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._01_tumble_window.local_agg;\n\n\npublic final class LocalWindowAggsHandler$88\n        implements org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<Long> {\n\n    private transient org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.TumblingSliceAssigner\n            sliceAssigner$21;\n    long agg0_count1;\n    boolean agg0_count1IsNull;\n    long agg1_sum;\n    boolean agg1_sumIsNull;\n    long agg2_max;\n    boolean agg2_maxIsNull;\n    long agg3_min;\n    boolean agg3_minIsNull;\n    long agg4_count;\n    boolean agg4_countIsNull;\n    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n    org.apache.flink.table.data.GenericRowData acc$23 = new org.apache.flink.table.data.GenericRowData(6);\n    org.apache.flink.table.data.GenericRowData acc$25 = new org.apache.flink.table.data.GenericRowData(6);\n    private org.apache.flink.table.api.dataview.MapView otherMapView$77;\n    private transient org.apache.flink.table.data.conversion.RawObjectConverter converter$78;\n    org.apache.flink.table.data.GenericRowData aggValue$87 = new org.apache.flink.table.data.GenericRowData(5);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    private Long namespace;\n\n    public LocalWindowAggsHandler$88(Object[] references) throws Exception {\n        sliceAssigner$21 =\n                (((org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.TumblingSliceAssigner) references[0]));\n        converter$78 = (((org.apache.flink.table.data.conversion.RawObjectConverter) references[1]));\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n        converter$78.open(getRuntimeContext().getUserCodeClassLoader());\n\n    }\n\n    @Override\n    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n        boolean isNull$32;\n        long result$33;\n        long field$34;\n        boolean isNull$34;\n        boolean isNull$35;\n        long result$36;\n        boolean isNull$39;\n        boolean result$40;\n        boolean isNull$44;\n        boolean result$45;\n        long field$49;\n        boolean isNull$49;\n        boolean isNull$51;\n        long result$52;\n        isNull$34 = accInput.isNullAt(2);\n        field$34 = -1L;\n        if (!isNull$34) {\n            field$34 = accInput.getLong(2);\n        }\n        isNull$49 = accInput.isNullAt(3);\n        field$49 = -1L;\n        if (!isNull$49) {\n            field$49 = accInput.getLong(3);\n        }\n\n\n        isNull$32 = agg0_count1IsNull || false;\n        result$33 = -1L;\n        if (!isNull$32) {\n\n            result$33 = (long) (agg0_count1 + ((long) 1L));\n\n        }\n\n        agg0_count1 = result$33;\n        ;\n        agg0_count1IsNull = isNull$32;\n\n\n        long result$38 = -1L;\n        boolean isNull$38;\n        if (isNull$34) {\n\n            isNull$38 = agg1_sumIsNull;\n            if (!isNull$38) {\n                result$38 = agg1_sum;\n            }\n        } else {\n            long result$37 = -1L;\n            boolean isNull$37;\n            if (agg1_sumIsNull) {\n\n                isNull$37 = isNull$34;\n                if (!isNull$37) {\n                    
result$37 = field$34;\n                }\n            } else {\n\n\n                isNull$35 = agg1_sumIsNull || isNull$34;\n                result$36 = -1L;\n                if (!isNull$35) {\n\n                    result$36 = (long) (agg1_sum + field$34);\n\n                }\n\n                isNull$37 = isNull$35;\n                if (!isNull$37) {\n                    result$37 = result$36;\n                }\n            }\n            isNull$38 = isNull$37;\n            if (!isNull$38) {\n                result$38 = result$37;\n            }\n        }\n        agg1_sum = result$38;\n        ;\n        agg1_sumIsNull = isNull$38;\n\n\n        long result$43 = -1L;\n        boolean isNull$43;\n        if (isNull$34) {\n\n            isNull$43 = agg2_maxIsNull;\n            if (!isNull$43) {\n                result$43 = agg2_max;\n            }\n        } else {\n            long result$42 = -1L;\n            boolean isNull$42;\n            if (agg2_maxIsNull) {\n\n                isNull$42 = isNull$34;\n                if (!isNull$42) {\n                    result$42 = field$34;\n                }\n            } else {\n                isNull$39 = isNull$34 || agg2_maxIsNull;\n                result$40 = false;\n                if (!isNull$39) {\n\n                    result$40 = field$34 > agg2_max;\n\n                }\n\n                long result$41 = -1L;\n                boolean isNull$41;\n                if (result$40) {\n\n                    isNull$41 = isNull$34;\n                    if (!isNull$41) {\n                        result$41 = field$34;\n                    }\n                } else {\n\n                    isNull$41 = agg2_maxIsNull;\n                    if (!isNull$41) {\n                        result$41 = agg2_max;\n                    }\n                }\n                isNull$42 = isNull$41;\n                if (!isNull$42) {\n                    result$42 = result$41;\n                }\n            }\n            isNull$43 = isNull$42;\n            if (!isNull$43) {\n                result$43 = result$42;\n            }\n        }\n        agg2_max = result$43;\n        ;\n        agg2_maxIsNull = isNull$43;\n\n\n        long result$48 = -1L;\n        boolean isNull$48;\n        if (isNull$34) {\n\n            isNull$48 = agg3_minIsNull;\n            if (!isNull$48) {\n                result$48 = agg3_min;\n            }\n        } else {\n            long result$47 = -1L;\n            boolean isNull$47;\n            if (agg3_minIsNull) {\n\n                isNull$47 = isNull$34;\n                if (!isNull$47) {\n                    result$47 = field$34;\n                }\n            } else {\n                isNull$44 = isNull$34 || agg3_minIsNull;\n                result$45 = false;\n                if (!isNull$44) {\n\n                    result$45 = field$34 < agg3_min;\n\n                }\n\n                long result$46 = -1L;\n                boolean isNull$46;\n                if (result$45) {\n\n                    isNull$46 = isNull$34;\n                    if (!isNull$46) {\n                        result$46 = field$34;\n                    }\n                } else {\n\n                    isNull$46 = agg3_minIsNull;\n                    if (!isNull$46) {\n                        result$46 = agg3_min;\n                    }\n                }\n                isNull$47 = isNull$46;\n                if (!isNull$47) {\n                    result$47 = result$46;\n                }\n            }\n            isNull$48 = isNull$47;\n   
         if (!isNull$48) {\n                result$48 = result$47;\n            }\n        }\n        agg3_min = result$48;\n        ;\n        agg3_minIsNull = isNull$48;\n\n\n        Long distinctKey$50 = (Long) field$49;\n        if (isNull$49) {\n            distinctKey$50 = null;\n        }\n\n        Long value$54 = (Long) distinct_view_0.get(distinctKey$50);\n        if (value$54 == null) {\n            value$54 = 0L;\n        }\n\n        boolean is_distinct_value_changed_0 = false;\n\n        long existed$55 = ((long) value$54) & (1L << 0);\n        if (existed$55 == 0) {  // not existed\n            value$54 = ((long) value$54) | (1L << 0);\n            is_distinct_value_changed_0 = true;\n\n            long result$53 = -1L;\n            boolean isNull$53;\n            if (isNull$49) {\n\n                isNull$53 = agg4_countIsNull;\n                if (!isNull$53) {\n                    result$53 = agg4_count;\n                }\n            } else {\n\n\n                isNull$51 = agg4_countIsNull || false;\n                result$52 = -1L;\n                if (!isNull$51) {\n\n                    result$52 = (long) (agg4_count + ((long) 1L));\n\n                }\n\n                isNull$53 = isNull$51;\n                if (!isNull$53) {\n                    result$53 = result$52;\n                }\n            }\n            agg4_count = result$53;\n            ;\n            agg4_countIsNull = isNull$53;\n\n        }\n\n        if (is_distinct_value_changed_0) {\n            distinct_view_0.put(distinctKey$50, value$54);\n        }\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        throw new RuntimeException(\n                \"This function not require retract method, but the retract method is called.\");\n\n    }\n\n    @Override\n    public void merge(Long ns, org.apache.flink.table.data.RowData otherAcc) throws Exception {\n        namespace = (Long) ns;\n\n        long field$56;\n        boolean isNull$56;\n        boolean isNull$57;\n        long result$58;\n        long field$59;\n        boolean isNull$59;\n        boolean isNull$60;\n        long result$61;\n        long field$64;\n        boolean isNull$64;\n        boolean isNull$65;\n        boolean result$66;\n        long field$70;\n        boolean isNull$70;\n        boolean isNull$71;\n        boolean result$72;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$76;\n        boolean isNull$76;\n        boolean isNull$82;\n        long result$83;\n        isNull$64 = otherAcc.isNullAt(2);\n        field$64 = -1L;\n        if (!isNull$64) {\n            field$64 = otherAcc.getLong(2);\n        }\n        isNull$59 = otherAcc.isNullAt(1);\n        field$59 = -1L;\n        if (!isNull$59) {\n            field$59 = otherAcc.getLong(1);\n        }\n        isNull$56 = otherAcc.isNullAt(0);\n        field$56 = -1L;\n        if (!isNull$56) {\n            field$56 = otherAcc.getLong(0);\n        }\n        isNull$70 = otherAcc.isNullAt(3);\n        field$70 = -1L;\n        if (!isNull$70) {\n            field$70 = otherAcc.getLong(3);\n        }\n\n        isNull$76 = otherAcc.isNullAt(5);\n        field$76 = null;\n        if (!isNull$76) {\n            field$76 = ((org.apache.flink.table.data.binary.BinaryRawValueData) otherAcc.getRawValue(5));\n        }\n        otherMapView$77 = null;\n        if (!isNull$76) {\n            otherMapView$77 =\n                    (org.apache.flink.table.api.dataview.MapView) 
converter$78\n                            .toExternal((org.apache.flink.table.data.binary.BinaryRawValueData) field$76);\n        }\n\n\n        isNull$57 = agg0_count1IsNull || isNull$56;\n        result$58 = -1L;\n        if (!isNull$57) {\n\n            result$58 = (long) (agg0_count1 + field$56);\n\n        }\n\n        agg0_count1 = result$58;\n        ;\n        agg0_count1IsNull = isNull$57;\n\n\n        long result$63 = -1L;\n        boolean isNull$63;\n        if (isNull$59) {\n\n            isNull$63 = agg1_sumIsNull;\n            if (!isNull$63) {\n                result$63 = agg1_sum;\n            }\n        } else {\n            long result$62 = -1L;\n            boolean isNull$62;\n            if (agg1_sumIsNull) {\n\n                isNull$62 = isNull$59;\n                if (!isNull$62) {\n                    result$62 = field$59;\n                }\n            } else {\n\n\n                isNull$60 = agg1_sumIsNull || isNull$59;\n                result$61 = -1L;\n                if (!isNull$60) {\n\n                    result$61 = (long) (agg1_sum + field$59);\n\n                }\n\n                isNull$62 = isNull$60;\n                if (!isNull$62) {\n                    result$62 = result$61;\n                }\n            }\n            isNull$63 = isNull$62;\n            if (!isNull$63) {\n                result$63 = result$62;\n            }\n        }\n        agg1_sum = result$63;\n        ;\n        agg1_sumIsNull = isNull$63;\n\n\n        long result$69 = -1L;\n        boolean isNull$69;\n        if (isNull$64) {\n\n            isNull$69 = agg2_maxIsNull;\n            if (!isNull$69) {\n                result$69 = agg2_max;\n            }\n        } else {\n            long result$68 = -1L;\n            boolean isNull$68;\n            if (agg2_maxIsNull) {\n\n                isNull$68 = isNull$64;\n                if (!isNull$68) {\n                    result$68 = field$64;\n                }\n            } else {\n                isNull$65 = isNull$64 || agg2_maxIsNull;\n                result$66 = false;\n                if (!isNull$65) {\n\n                    result$66 = field$64 > agg2_max;\n\n                }\n\n                long result$67 = -1L;\n                boolean isNull$67;\n                if (result$66) {\n\n                    isNull$67 = isNull$64;\n                    if (!isNull$67) {\n                        result$67 = field$64;\n                    }\n                } else {\n\n                    isNull$67 = agg2_maxIsNull;\n                    if (!isNull$67) {\n                        result$67 = agg2_max;\n                    }\n                }\n                isNull$68 = isNull$67;\n                if (!isNull$68) {\n                    result$68 = result$67;\n                }\n            }\n            isNull$69 = isNull$68;\n            if (!isNull$69) {\n                result$69 = result$68;\n            }\n        }\n        agg2_max = result$69;\n        ;\n        agg2_maxIsNull = isNull$69;\n\n\n        long result$75 = -1L;\n        boolean isNull$75;\n        if (isNull$70) {\n\n            isNull$75 = agg3_minIsNull;\n            if (!isNull$75) {\n                result$75 = agg3_min;\n            }\n        } else {\n            long result$74 = -1L;\n            boolean isNull$74;\n            if (agg3_minIsNull) {\n\n                isNull$74 = isNull$70;\n                if (!isNull$74) {\n                    result$74 = field$70;\n                }\n            } else {\n                isNull$71 = 
isNull$70 || agg3_minIsNull;\n                result$72 = false;\n                if (!isNull$71) {\n\n                    result$72 = field$70 < agg3_min;\n\n                }\n\n                long result$73 = -1L;\n                boolean isNull$73;\n                if (result$72) {\n\n                    isNull$73 = isNull$70;\n                    if (!isNull$73) {\n                        result$73 = field$70;\n                    }\n                } else {\n\n                    isNull$73 = agg3_minIsNull;\n                    if (!isNull$73) {\n                        result$73 = agg3_min;\n                    }\n                }\n                isNull$74 = isNull$73;\n                if (!isNull$74) {\n                    result$74 = result$73;\n                }\n            }\n            isNull$75 = isNull$74;\n            if (!isNull$75) {\n                result$75 = result$74;\n            }\n        }\n        agg3_min = result$75;\n        ;\n        agg3_minIsNull = isNull$75;\n\n\n        Iterable<java.util.Map.Entry> otherEntries$85 =\n                (Iterable<java.util.Map.Entry>) otherMapView$77.entries();\n        if (otherEntries$85 != null) {\n            for (java.util.Map.Entry entry : otherEntries$85) {\n                Long distinctKey$79 = (Long) entry.getKey();\n                long field$80 = -1L;\n                boolean isNull$81 = true;\n                if (distinctKey$79 != null) {\n                    isNull$81 = false;\n                    field$80 = (long) distinctKey$79;\n                }\n                Long otherValue = (Long) entry.getValue();\n                Long thisValue = (Long) distinct_view_0.get(distinctKey$79);\n                if (thisValue == null) {\n                    thisValue = 0L;\n                }\n                boolean is_distinct_value_changed_0 = false;\n                boolean is_distinct_value_empty_0 = false;\n\n\n                long existed$86 = ((long) thisValue) & (1L << 0);\n                if (existed$86 == 0) {  // not existed\n                    long otherExisted = ((long) otherValue) & (1L << 0);\n                    if (otherExisted != 0) {  // existed in other\n                        is_distinct_value_changed_0 = true;\n                        // do accumulate\n\n                        long result$84 = -1L;\n                        boolean isNull$84;\n                        if (isNull$81) {\n\n                            isNull$84 = agg4_countIsNull;\n                            if (!isNull$84) {\n                                result$84 = agg4_count;\n                            }\n                        } else {\n\n\n                            isNull$82 = agg4_countIsNull || false;\n                            result$83 = -1L;\n                            if (!isNull$82) {\n\n                                result$83 = (long) (agg4_count + ((long) 1L));\n\n                            }\n\n                            isNull$84 = isNull$82;\n                            if (!isNull$84) {\n                                result$84 = result$83;\n                            }\n                        }\n                        agg4_count = result$84;\n                        ;\n                        agg4_countIsNull = isNull$84;\n\n                    }\n                }\n\n                thisValue = ((long) thisValue) | ((long) otherValue);\n                is_distinct_value_empty_0 = false;\n\n                if (is_distinct_value_empty_0) {\n                    
distinct_view_0.remove(distinctKey$79);\n                } else if (is_distinct_value_changed_0) { // value is not empty and is changed, do update\n                    distinct_view_0.put(distinctKey$79, thisValue);\n                }\n            } // end foreach\n        } // end otherEntries != null\n\n\n    }\n\n    @Override\n    public void setAccumulators(Long ns, org.apache.flink.table.data.RowData acc)\n            throws Exception {\n        namespace = (Long) ns;\n\n        long field$26;\n        boolean isNull$26;\n        long field$27;\n        boolean isNull$27;\n        long field$28;\n        boolean isNull$28;\n        long field$29;\n        boolean isNull$29;\n        long field$30;\n        boolean isNull$30;\n        org.apache.flink.table.data.binary.BinaryRawValueData field$31;\n        boolean isNull$31;\n        isNull$30 = acc.isNullAt(4);\n        field$30 = -1L;\n        if (!isNull$30) {\n            field$30 = acc.getLong(4);\n        }\n        isNull$26 = acc.isNullAt(0);\n        field$26 = -1L;\n        if (!isNull$26) {\n            field$26 = acc.getLong(0);\n        }\n        isNull$27 = acc.isNullAt(1);\n        field$27 = -1L;\n        if (!isNull$27) {\n            field$27 = acc.getLong(1);\n        }\n        isNull$29 = acc.isNullAt(3);\n        field$29 = -1L;\n        if (!isNull$29) {\n            field$29 = acc.getLong(3);\n        }\n\n        isNull$31 = acc.isNullAt(5);\n        field$31 = null;\n        if (!isNull$31) {\n            field$31 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n        }\n        distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$31.getJavaObject();\n\n        isNull$28 = acc.isNullAt(2);\n        field$28 = -1L;\n        if (!isNull$28) {\n            field$28 = acc.getLong(2);\n        }\n\n        agg0_count1 = field$26;\n        ;\n        agg0_count1IsNull = isNull$26;\n\n\n        agg1_sum = field$27;\n        ;\n        agg1_sumIsNull = isNull$27;\n\n\n        agg2_max = field$28;\n        ;\n        agg2_maxIsNull = isNull$28;\n\n\n        agg3_min = field$29;\n        ;\n        agg3_minIsNull = isNull$29;\n\n\n        agg4_count = field$30;\n        ;\n        agg4_countIsNull = isNull$30;\n\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n\n\n        acc$25 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (agg0_count1IsNull) {\n            acc$25.setField(0, null);\n        } else {\n            acc$25.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            acc$25.setField(1, null);\n        } else {\n            acc$25.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            acc$25.setField(2, null);\n        } else {\n            acc$25.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            acc$25.setField(3, null);\n        } else {\n            acc$25.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            acc$25.setField(4, null);\n        } else {\n            acc$25.setField(4, agg4_count);\n        }\n\n\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$24 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n\n        if (false) {\n            acc$25.setField(5, null);\n        } else {\n            acc$25.setField(5, distinct_acc$24);\n        }\n\n\n        return 
acc$25;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n\n\n        acc$23 = new org.apache.flink.table.data.GenericRowData(6);\n\n\n        if (false) {\n            acc$23.setField(0, null);\n        } else {\n            acc$23.setField(0, ((long) 0L));\n        }\n\n\n        if (true) {\n            acc$23.setField(1, null);\n        } else {\n            acc$23.setField(1, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$23.setField(2, null);\n        } else {\n            acc$23.setField(2, ((long) -1L));\n        }\n\n\n        if (true) {\n            acc$23.setField(3, null);\n        } else {\n            acc$23.setField(3, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$23.setField(4, null);\n        } else {\n            acc$23.setField(4, ((long) 0L));\n        }\n\n\n        org.apache.flink.table.api.dataview.MapView mapview$22 = new org.apache.flink.table.api.dataview.MapView();\n        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$22 =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$22);\n\n        if (false) {\n            acc$23.setField(5, null);\n        } else {\n            acc$23.setField(5, distinct_acc$22);\n        }\n\n\n        return acc$23;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue(Long ns) throws Exception {\n        namespace = (Long) ns;\n\n\n        aggValue$87 = new org.apache.flink.table.data.GenericRowData(5);\n\n\n        if (agg0_count1IsNull) {\n            aggValue$87.setField(0, null);\n        } else {\n            aggValue$87.setField(0, agg0_count1);\n        }\n\n\n        if (agg1_sumIsNull) {\n            aggValue$87.setField(1, null);\n        } else {\n            aggValue$87.setField(1, agg1_sum);\n        }\n\n\n        if (agg2_maxIsNull) {\n            aggValue$87.setField(2, null);\n        } else {\n            aggValue$87.setField(2, agg2_max);\n        }\n\n\n        if (agg3_minIsNull) {\n            aggValue$87.setField(3, null);\n        } else {\n            aggValue$87.setField(3, agg3_min);\n        }\n\n\n        if (agg4_countIsNull) {\n            aggValue$87.setField(4, null);\n        } else {\n            aggValue$87.setField(4, agg4_count);\n        }\n\n\n        return aggValue$87;\n\n    }\n\n    @Override\n    public void cleanup(Long ns) throws Exception {\n        namespace = (Long) ns;\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/CumulateWindowGroupingSetsBigintTest.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._02_cumulate_window;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class CumulateWindowGroupingSetsBigintTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    age BIGINT,\\n\"\n                + \"    sex STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.age.min' = '1',\\n\"\n                + \"  'fields.age.max' = '10',\\n\"\n                + \"  'fields.sex.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    age STRING,\\n\"\n                + \"    sex STRING,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_end bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select age,\\n\"\n                + \"       sex,\\n\"\n                + \"       sum(bucket_uv) as uv,\\n\"\n                + \"       max(window_end) as window_end\\n\"\n                + \"from (\\n\"\n                + \"     SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"            window_start, \\n\"\n                + \"            if (age is null, 'ALL', cast(age as string)) as age,\\n\"\n                + \"            if (sex is null, 'ALL', sex) as sex,\\n\"\n                + \"            count(distinct user_id) as bucket_uv\\n\"\n                + \"     FROM TABLE(CUMULATE(\\n\"\n                + \"               TABLE source_table\\n\"\n                + \"               , DESCRIPTOR(row_time)\\n\"\n                + \"               , INTERVAL '5' SECOND\\n\"\n                + \"               , INTERVAL '1' DAY))\\n\"\n                + \"     GROUP BY window_start, \\n\"\n                + \"              window_end,\\n\"\n                + \"              GROUPING SETS ((), (age), (sex), (age, sex)),\\n\"\n                + \"              mod(user_id, 1024)\\n\"\n                + \")\\n\"\n                + \"group by age,\\n\"\n                + \"         sex,\\n\"\n                + \"         window_end;\";\n\n        for (String innerSql : sql.split(\";\")) {\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/CumulateWindowGroupingSetsTest.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._02_cumulate_window;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class CumulateWindowGroupingSetsTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    age STRING,\\n\"\n                + \"    sex STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.age.length' = '1',\\n\"\n                + \"  'fields.sex.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    age STRING,\\n\"\n                + \"    sex STRING,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_end bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select age,\\n\"\n                + \"       sex,\\n\"\n                + \"       sum(bucket_uv) as uv,\\n\"\n                + \"       max(window_end) as window_end\\n\"\n                + \"from (\\n\"\n                + \"     SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"            window_start, \\n\"\n                + \"            if (age is null, 'ALL', age) as age,\\n\"\n                + \"            if (sex is null, 'ALL', sex) as sex,\\n\"\n                + \"            count(distinct user_id) as bucket_uv\\n\"\n                + \"     FROM TABLE(CUMULATE(\\n\"\n                + \"               TABLE source_table\\n\"\n                + \"               , DESCRIPTOR(row_time)\\n\"\n                + \"               , INTERVAL '5' SECOND\\n\"\n                + \"               , INTERVAL '1' DAY))\\n\"\n                + \"     GROUP BY window_start, \\n\"\n                + \"              window_end,\\n\"\n                + \"              GROUPING SETS ((), (age), (sex), (age, sex)),\\n\"\n                + \"              mod(user_id, 1024)\\n\"\n                + \")\\n\"\n                + \"group by age,\\n\"\n                + \"         sex,\\n\"\n                + \"         window_end;\";\n\n        for (String innerSql : sql.split(\";\")) {\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/CumulateWindowTest.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._02_cumulate_window;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class CumulateWindowTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1000',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_end bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"       sum(bucket_pv) as pv,\\n\"\n                + \"       sum(bucket_sum_price) as sum_price,\\n\"\n                + \"       max(bucket_max_price) as max_price,\\n\"\n                + \"       min(bucket_min_price) as min_price,\\n\"\n                + \"       sum(bucket_uv) as uv,\\n\"\n                + \"       max(window_end) as window_end\\n\"\n                + \"from (\\n\"\n                + \"     SELECT dim,\\n\"\n                + \"            UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"            window_start, \\n\"\n                + \"            count(*) as bucket_pv,\\n\"\n                + \"            sum(price) as bucket_sum_price,\\n\"\n                + \"            max(price) as bucket_max_price,\\n\"\n                + \"            min(price) as bucket_min_price,\\n\"\n                + \"            count(distinct user_id) as bucket_uv\\n\"\n                + \"     FROM TABLE(CUMULATE(\\n\"\n                + \"               TABLE source_table\\n\"\n                + \"               , DESCRIPTOR(row_time)\\n\"\n                + \"               , INTERVAL '60' SECOND\\n\"\n                + \"               , INTERVAL '1' DAY))\\n\"\n                + \"     GROUP BY window_start, \\n\"\n                + \"              window_end,\\n\"\n                + \"              dim,\\n\"\n                + \"              mod(user_id, 1024)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"         window_end;\";\n\n        String exampleSql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    
money BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp_LTZ(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1000',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    window_end bigint,\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT,\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"with tmp as (\\n\"\n                + \"    SELECT \\n\"\n                + \"      window_time as r_time,\\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id\\n\"\n                + \"    FROM TABLE(CUMULATE(\\n\"\n                + \"             TABLE source_table\\n\"\n                + \"             , DESCRIPTOR(row_time)\\n\"\n                + \"             , INTERVAL '60' SECOND\\n\"\n                + \"             , INTERVAL '1' DAY))\\n\"\n                + \"    GROUP BY window_start, \\n\"\n                + \"            window_end,\\n\"\n                + \"            window_time,\\n\"\n                + \"            mod(id, 1000)\\n\"\n                + \")\\n\"\n                + \"SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(sum_money) as sum_money,\\n\"\n                + \"      sum(count_distinct_id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE tmp\\n\"\n                + \"         , DESCRIPTOR(r_time)\\n\"\n                + \"         , INTERVAL '60' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\";\n\n        for (String innerSql : exampleSql.split(\";\")) {\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/TumbleWindowEarlyFireTest.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._02_cumulate_window;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TumbleWindowEarlyFireTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.getStreamTableEnvironment().getConfig().getConfiguration().setString(\"table.exec.emit.early-fire.enabled\", \"true\");\n        flinkEnv.getStreamTableEnvironment().getConfig().getConfiguration().setString(\"table.exec.emit.early-fire.delay\", \"60 s\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    dim BIGINT,\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.dim.min' = '1',\\n\"\n                + \"  'fields.dim.max' = '2',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    dim BIGINT,\\n\"\n                + \"    pv BIGINT,\\n\"\n                + \"    sum_price BIGINT,\\n\"\n                + \"    max_price BIGINT,\\n\"\n                + \"    min_price BIGINT,\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"       sum(bucket_pv) as pv,\\n\"\n                + \"       sum(bucket_sum_price) as sum_price,\\n\"\n                + \"       max(bucket_max_price) as max_price,\\n\"\n                + \"       min(bucket_min_price) as min_price,\\n\"\n                + \"       sum(bucket_uv) as uv,\\n\"\n                + \"       max(window_start) as window_start\\n\"\n                + \"from (\\n\"\n                + \"     select dim,\\n\"\n                + \"            count(*) as bucket_pv,\\n\"\n                + \"            sum(price) as bucket_sum_price,\\n\"\n                + \"            max(price) as bucket_max_price,\\n\"\n                + \"            min(price) as bucket_min_price,\\n\"\n                + \"            count(distinct user_id) as bucket_uv,\\n\"\n                + \"            UNIX_TIMESTAMP(CAST(tumble_start(row_time, interval '1' DAY) AS STRING)) * 1000 as window_start\\n\"\n                + \"     from source_table\\n\"\n                + \"     group by\\n\"\n                + \"            mod(user_id, 1024),\\n\"\n                + \"            dim,\\n\"\n                + \"            tumble(row_time, interval '1' DAY)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"         window_start\";\n\n        
flinkEnv.getStreamTableEnvironment().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 WINDOW TVF TUMBLE WINDOW EARLY FIRE example\");\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.getStreamTableEnvironment()::executeSql);\n\n\n\n        /**\n         * Two-phase aggregation:\n         * Local agg: {@link org.apache.flink.table.runtime.operators.aggregate.window.LocalSlicingWindowAggOperator}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.LocalAggCombiner}\n         *\n         * Global (keyed) agg: {@link org.apache.flink.table.runtime.operators.window.slicing.SlicingWindowOperator}\n         *    -> {@link org.apache.flink.table.runtime.operators.aggregate.window.processors.SliceUnsharedWindowAggProcessor}\n         *                   -> {@link org.apache.flink.table.runtime.operators.aggregate.window.combines.GlobalAggCombiner}\n         */\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/cumulate/global_agg/GlobalWindowAggsHandler$232.java",
    "content": "//package flink.examples.sql._07.query._04_window_agg._02_cumulate_window.cumulate.global_agg;\n//\n//\n//public final class GlobalWindowAggsHandler$232\n//        implements org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<java.lang.Long> {\n//\n//    private transient org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedSharedSliceAssigner\n//            sliceAssigner$163;\n//    long agg0_count1;\n//    boolean agg0_count1IsNull;\n//    long agg1_sum;\n//    boolean agg1_sumIsNull;\n//    long agg2_max;\n//    boolean agg2_maxIsNull;\n//    long agg3_min;\n//    boolean agg3_minIsNull;\n//    long agg4_count;\n//    boolean agg4_countIsNull;\n//    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$164;\n//    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$165;\n//    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview;\n//    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_raw_value;\n//    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n//    org.apache.flink.table.data.GenericRowData acc$167 = new org.apache.flink.table.data.GenericRowData(6);\n//    org.apache.flink.table.data.GenericRowData acc$169 = new org.apache.flink.table.data.GenericRowData(6);\n//    private org.apache.flink.table.api.dataview.MapView otherMapView$221;\n//    private transient org.apache.flink.table.data.conversion.RawObjectConverter converter$222;\n//    org.apache.flink.table.data.GenericRowData aggValue$231 = new org.apache.flink.table.data.GenericRowData(7);\n//\n//    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n//\n//    private java.lang.Long namespace;\n//\n//    public GlobalWindowAggsHandler$232(Object[] references) throws Exception {\n//        sliceAssigner$163 =\n//                (((org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedSharedSliceAssigner) references[0]));\n//        externalSerializer$164 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[1]));\n//        externalSerializer$165 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[2]));\n//        converter$222 = (((org.apache.flink.table.data.conversion.RawObjectConverter) references[3]));\n//    }\n//\n//    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n//        return store.getRuntimeContext();\n//    }\n//\n//    @Override\n//    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n//        this.store = store;\n//\n//        distinctAcc_0_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n//                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$164, externalSerializer$165);\n//        distinctAcc_0_dataview_raw_value =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview);\n//\n//        distinct_view_0 = distinctAcc_0_dataview;\n//\n//        converter$222.open(getRuntimeContext().getUserCodeClassLoader());\n//\n//    }\n//\n//    @Override\n//    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n//\n//        boolean isNull$176;\n//        long result$177;\n//        long field$178;\n//        boolean isNull$178;\n//        boolean isNull$179;\n//        
long result$180;\n//        boolean isNull$183;\n//        boolean result$184;\n//        boolean isNull$188;\n//        boolean result$189;\n//        long field$193;\n//        boolean isNull$193;\n//        boolean isNull$195;\n//        long result$196;\n//        isNull$178 = accInput.isNullAt(2);\n//        field$178 = -1L;\n//        if (!isNull$178) {\n//            field$178 = accInput.getLong(2);\n//        }\n//        isNull$193 = accInput.isNullAt(3);\n//        field$193 = -1L;\n//        if (!isNull$193) {\n//            field$193 = accInput.getLong(3);\n//        }\n//\n//\n//        isNull$176 = agg0_count1IsNull || false;\n//        result$177 = -1L;\n//        if (!isNull$176) {\n//\n//            result$177 = (long) (agg0_count1 + ((long) 1L));\n//\n//        }\n//\n//        agg0_count1 = result$177;\n//        ;\n//        agg0_count1IsNull = isNull$176;\n//\n//\n//        long result$182 = -1L;\n//        boolean isNull$182;\n//        if (isNull$178) {\n//\n//            isNull$182 = agg1_sumIsNull;\n//            if (!isNull$182) {\n//                result$182 = agg1_sum;\n//            }\n//        } else {\n//            long result$181 = -1L;\n//            boolean isNull$181;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$181 = isNull$178;\n//                if (!isNull$181) {\n//                    result$181 = field$178;\n//                }\n//            } else {\n//\n//\n//                isNull$179 = agg1_sumIsNull || isNull$178;\n//                result$180 = -1L;\n//                if (!isNull$179) {\n//\n//                    result$180 = (long) (agg1_sum + field$178);\n//\n//                }\n//\n//                isNull$181 = isNull$179;\n//                if (!isNull$181) {\n//                    result$181 = result$180;\n//                }\n//            }\n//            isNull$182 = isNull$181;\n//            if (!isNull$182) {\n//                result$182 = result$181;\n//            }\n//        }\n//        agg1_sum = result$182;\n//        ;\n//        agg1_sumIsNull = isNull$182;\n//\n//\n//        long result$187 = -1L;\n//        boolean isNull$187;\n//        if (isNull$178) {\n//\n//            isNull$187 = agg2_maxIsNull;\n//            if (!isNull$187) {\n//                result$187 = agg2_max;\n//            }\n//        } else {\n//            long result$186 = -1L;\n//            boolean isNull$186;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$186 = isNull$178;\n//                if (!isNull$186) {\n//                    result$186 = field$178;\n//                }\n//            } else {\n//                isNull$183 = isNull$178 || agg2_maxIsNull;\n//                result$184 = false;\n//                if (!isNull$183) {\n//\n//                    result$184 = field$178 > agg2_max;\n//\n//                }\n//\n//                long result$185 = -1L;\n//                boolean isNull$185;\n//                if (result$184) {\n//\n//                    isNull$185 = isNull$178;\n//                    if (!isNull$185) {\n//                        result$185 = field$178;\n//                    }\n//                } else {\n//\n//                    isNull$185 = agg2_maxIsNull;\n//                    if (!isNull$185) {\n//                        result$185 = agg2_max;\n//                    }\n//                }\n//                isNull$186 = isNull$185;\n//                if (!isNull$186) {\n//                    result$186 = result$185;\n//                }\n//            }\n//     
       isNull$187 = isNull$186;\n//            if (!isNull$187) {\n//                result$187 = result$186;\n//            }\n//        }\n//        agg2_max = result$187;\n//        ;\n//        agg2_maxIsNull = isNull$187;\n//\n//\n//        long result$192 = -1L;\n//        boolean isNull$192;\n//        if (isNull$178) {\n//\n//            isNull$192 = agg3_minIsNull;\n//            if (!isNull$192) {\n//                result$192 = agg3_min;\n//            }\n//        } else {\n//            long result$191 = -1L;\n//            boolean isNull$191;\n//            if (agg3_minIsNull) {\n//\n//                isNull$191 = isNull$178;\n//                if (!isNull$191) {\n//                    result$191 = field$178;\n//                }\n//            } else {\n//                isNull$188 = isNull$178 || agg3_minIsNull;\n//                result$189 = false;\n//                if (!isNull$188) {\n//\n//                    result$189 = field$178 < agg3_min;\n//\n//                }\n//\n//                long result$190 = -1L;\n//                boolean isNull$190;\n//                if (result$189) {\n//\n//                    isNull$190 = isNull$178;\n//                    if (!isNull$190) {\n//                        result$190 = field$178;\n//                    }\n//                } else {\n//\n//                    isNull$190 = agg3_minIsNull;\n//                    if (!isNull$190) {\n//                        result$190 = agg3_min;\n//                    }\n//                }\n//                isNull$191 = isNull$190;\n//                if (!isNull$191) {\n//                    result$191 = result$190;\n//                }\n//            }\n//            isNull$192 = isNull$191;\n//            if (!isNull$192) {\n//                result$192 = result$191;\n//            }\n//        }\n//        agg3_min = result$192;\n//        ;\n//        agg3_minIsNull = isNull$192;\n//\n//\n//        java.lang.Long distinctKey$194 = (java.lang.Long) field$193;\n//        if (isNull$193) {\n//            distinctKey$194 = null;\n//        }\n//\n//        java.lang.Long value$198 = (java.lang.Long) distinct_view_0.get(distinctKey$194);\n//        if (value$198 == null) {\n//            value$198 = 0L;\n//        }\n//\n//        boolean is_distinct_value_changed_0 = false;\n//\n//        long existed$199 = ((long) value$198) & (1L << 0);\n//        if (existed$199 == 0) {  // not existed\n//            value$198 = ((long) value$198) | (1L << 0);\n//            is_distinct_value_changed_0 = true;\n//\n//            long result$197 = -1L;\n//            boolean isNull$197;\n//            if (isNull$193) {\n//\n//                isNull$197 = agg4_countIsNull;\n//                if (!isNull$197) {\n//                    result$197 = agg4_count;\n//                }\n//            } else {\n//\n//\n//                isNull$195 = agg4_countIsNull || false;\n//                result$196 = -1L;\n//                if (!isNull$195) {\n//\n//                    result$196 = (long) (agg4_count + ((long) 1L));\n//\n//                }\n//\n//                isNull$197 = isNull$195;\n//                if (!isNull$197) {\n//                    result$197 = result$196;\n//                }\n//            }\n//            agg4_count = result$197;\n//            ;\n//            agg4_countIsNull = isNull$197;\n//\n//        }\n//\n//        if (is_distinct_value_changed_0) {\n//            distinct_view_0.put(distinctKey$194, value$198);\n//        }\n//\n//\n//    }\n//\n//    @Override\n//    public 
void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n//\n//        throw new java.lang.RuntimeException(\n//                \"This function not require retract method, but the retract method is called.\");\n//\n//    }\n//\n//    @Override\n//    public void merge(Object ns, org.apache.flink.table.data.RowData otherAcc) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        long field$200;\n//        boolean isNull$200;\n//        boolean isNull$201;\n//        long result$202;\n//        long field$203;\n//        boolean isNull$203;\n//        boolean isNull$204;\n//        long result$205;\n//        long field$208;\n//        boolean isNull$208;\n//        boolean isNull$209;\n//        boolean result$210;\n//        long field$214;\n//        boolean isNull$214;\n//        boolean isNull$215;\n//        boolean result$216;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$220;\n//        boolean isNull$220;\n//        boolean isNull$226;\n//        long result$227;\n//        isNull$208 = otherAcc.isNullAt(2);\n//        field$208 = -1L;\n//        if (!isNull$208) {\n//            field$208 = otherAcc.getLong(2);\n//        }\n//        isNull$203 = otherAcc.isNullAt(1);\n//        field$203 = -1L;\n//        if (!isNull$203) {\n//            field$203 = otherAcc.getLong(1);\n//        }\n//        isNull$200 = otherAcc.isNullAt(0);\n//        field$200 = -1L;\n//        if (!isNull$200) {\n//            field$200 = otherAcc.getLong(0);\n//        }\n//        isNull$214 = otherAcc.isNullAt(3);\n//        field$214 = -1L;\n//        if (!isNull$214) {\n//            field$214 = otherAcc.getLong(3);\n//        }\n//\n//        isNull$220 = otherAcc.isNullAt(5);\n//        field$220 = null;\n//        if (!isNull$220) {\n//            field$220 = ((org.apache.flink.table.data.binary.BinaryRawValueData) otherAcc.getRawValue(5));\n//        }\n//        otherMapView$221 = null;\n//        if (!isNull$220) {\n//            otherMapView$221 =\n//                    (org.apache.flink.table.api.dataview.MapView) converter$222\n//                            .toExternal((org.apache.flink.table.data.binary.BinaryRawValueData) field$220);\n//        }\n//\n//\n//        isNull$201 = agg0_count1IsNull || isNull$200;\n//        result$202 = -1L;\n//        if (!isNull$201) {\n//\n//            result$202 = (long) (agg0_count1 + field$200);\n//\n//        }\n//\n//        agg0_count1 = result$202;\n//        ;\n//        agg0_count1IsNull = isNull$201;\n//\n//\n//        long result$207 = -1L;\n//        boolean isNull$207;\n//        if (isNull$203) {\n//\n//            isNull$207 = agg1_sumIsNull;\n//            if (!isNull$207) {\n//                result$207 = agg1_sum;\n//            }\n//        } else {\n//            long result$206 = -1L;\n//            boolean isNull$206;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$206 = isNull$203;\n//                if (!isNull$206) {\n//                    result$206 = field$203;\n//                }\n//            } else {\n//\n//\n//                isNull$204 = agg1_sumIsNull || isNull$203;\n//                result$205 = -1L;\n//                if (!isNull$204) {\n//\n//                    result$205 = (long) (agg1_sum + field$203);\n//\n//                }\n//\n//                isNull$206 = isNull$204;\n//                if (!isNull$206) {\n//                    result$206 = result$205;\n//                }\n//            }\n//            isNull$207 = 
isNull$206;\n//            if (!isNull$207) {\n//                result$207 = result$206;\n//            }\n//        }\n//        agg1_sum = result$207;\n//        ;\n//        agg1_sumIsNull = isNull$207;\n//\n//\n//        long result$213 = -1L;\n//        boolean isNull$213;\n//        if (isNull$208) {\n//\n//            isNull$213 = agg2_maxIsNull;\n//            if (!isNull$213) {\n//                result$213 = agg2_max;\n//            }\n//        } else {\n//            long result$212 = -1L;\n//            boolean isNull$212;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$212 = isNull$208;\n//                if (!isNull$212) {\n//                    result$212 = field$208;\n//                }\n//            } else {\n//                isNull$209 = isNull$208 || agg2_maxIsNull;\n//                result$210 = false;\n//                if (!isNull$209) {\n//\n//                    result$210 = field$208 > agg2_max;\n//\n//                }\n//\n//                long result$211 = -1L;\n//                boolean isNull$211;\n//                if (result$210) {\n//\n//                    isNull$211 = isNull$208;\n//                    if (!isNull$211) {\n//                        result$211 = field$208;\n//                    }\n//                } else {\n//\n//                    isNull$211 = agg2_maxIsNull;\n//                    if (!isNull$211) {\n//                        result$211 = agg2_max;\n//                    }\n//                }\n//                isNull$212 = isNull$211;\n//                if (!isNull$212) {\n//                    result$212 = result$211;\n//                }\n//            }\n//            isNull$213 = isNull$212;\n//            if (!isNull$213) {\n//                result$213 = result$212;\n//            }\n//        }\n//        agg2_max = result$213;\n//        ;\n//        agg2_maxIsNull = isNull$213;\n//\n//\n//        long result$219 = -1L;\n//        boolean isNull$219;\n//        if (isNull$214) {\n//\n//            isNull$219 = agg3_minIsNull;\n//            if (!isNull$219) {\n//                result$219 = agg3_min;\n//            }\n//        } else {\n//            long result$218 = -1L;\n//            boolean isNull$218;\n//            if (agg3_minIsNull) {\n//\n//                isNull$218 = isNull$214;\n//                if (!isNull$218) {\n//                    result$218 = field$214;\n//                }\n//            } else {\n//                isNull$215 = isNull$214 || agg3_minIsNull;\n//                result$216 = false;\n//                if (!isNull$215) {\n//\n//                    result$216 = field$214 < agg3_min;\n//\n//                }\n//\n//                long result$217 = -1L;\n//                boolean isNull$217;\n//                if (result$216) {\n//\n//                    isNull$217 = isNull$214;\n//                    if (!isNull$217) {\n//                        result$217 = field$214;\n//                    }\n//                } else {\n//\n//                    isNull$217 = agg3_minIsNull;\n//                    if (!isNull$217) {\n//                        result$217 = agg3_min;\n//                    }\n//                }\n//                isNull$218 = isNull$217;\n//                if (!isNull$218) {\n//                    result$218 = result$217;\n//                }\n//            }\n//            isNull$219 = isNull$218;\n//            if (!isNull$219) {\n//                result$219 = result$218;\n//            }\n//        }\n//        agg3_min = result$219;\n//        
;\n//        agg3_minIsNull = isNull$219;\n//\n//\n//        java.lang.Iterable<java.util.Map.Entry> otherEntries$229 =\n//                (java.lang.Iterable<java.util.Map.Entry>) otherMapView$221.entries();\n//        if (otherEntries$229 != null) {\n//            for (java.util.Map.Entry entry : otherEntries$229) {\n//                java.lang.Long distinctKey$223 = (java.lang.Long) entry.getKey();\n//                long field$224 = -1L;\n//                boolean isNull$225 = true;\n//                if (distinctKey$223 != null) {\n//                    isNull$225 = false;\n//                    field$224 = (long) distinctKey$223;\n//                }\n//                java.lang.Long otherValue = (java.lang.Long) entry.getValue();\n//                java.lang.Long thisValue = (java.lang.Long) distinct_view_0.get(distinctKey$223);\n//                if (thisValue == null) {\n//                    thisValue = 0L;\n//                }\n//                boolean is_distinct_value_changed_0 = false;\n//                boolean is_distinct_value_empty_0 = false;\n//\n//\n//                long existed$230 = ((long) thisValue) & (1L << 0);\n//                if (existed$230 == 0) {  // not existed\n//                    long otherExisted = ((long) otherValue) & (1L << 0);\n//                    if (otherExisted != 0) {  // existed in other\n//                        is_distinct_value_changed_0 = true;\n//                        // do accumulate\n//\n//                        long result$228 = -1L;\n//                        boolean isNull$228;\n//                        if (isNull$225) {\n//\n//                            isNull$228 = agg4_countIsNull;\n//                            if (!isNull$228) {\n//                                result$228 = agg4_count;\n//                            }\n//                        } else {\n//\n//\n//                            isNull$226 = agg4_countIsNull || false;\n//                            result$227 = -1L;\n//                            if (!isNull$226) {\n//\n//                                result$227 = (long) (agg4_count + ((long) 1L));\n//\n//                            }\n//\n//                            isNull$228 = isNull$226;\n//                            if (!isNull$228) {\n//                                result$228 = result$227;\n//                            }\n//                        }\n//                        agg4_count = result$228;\n//                        ;\n//                        agg4_countIsNull = isNull$228;\n//\n//                    }\n//                }\n//\n//                thisValue = ((long) thisValue) | ((long) otherValue);\n//                is_distinct_value_empty_0 = false;\n//\n//                if (is_distinct_value_empty_0) {\n//                    distinct_view_0.remove(distinctKey$223);\n//                } else if (is_distinct_value_changed_0) { // value is not empty and is changed, do update\n//                    distinct_view_0.put(distinctKey$223, thisValue);\n//                }\n//            } // end foreach\n//        } // end otherEntries != null\n//\n//\n//    }\n//\n//    @Override\n//    public void setAccumulators(Object ns, org.apache.flink.table.data.RowData acc)\n//            throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        long field$170;\n//        boolean isNull$170;\n//        long field$171;\n//        boolean isNull$171;\n//        long field$172;\n//        boolean isNull$172;\n//        long field$173;\n//        boolean isNull$173;\n//      
  long field$174;\n//        boolean isNull$174;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$175;\n//        boolean isNull$175;\n//        isNull$174 = acc.isNullAt(4);\n//        field$174 = -1L;\n//        if (!isNull$174) {\n//            field$174 = acc.getLong(4);\n//        }\n//        isNull$170 = acc.isNullAt(0);\n//        field$170 = -1L;\n//        if (!isNull$170) {\n//            field$170 = acc.getLong(0);\n//        }\n//        isNull$171 = acc.isNullAt(1);\n//        field$171 = -1L;\n//        if (!isNull$171) {\n//            field$171 = acc.getLong(1);\n//        }\n//        isNull$173 = acc.isNullAt(3);\n//        field$173 = -1L;\n//        if (!isNull$173) {\n//            field$173 = acc.getLong(3);\n//        }\n//\n//        // when namespace is null, the dataview is used in heap, no key and namespace set\n//        if (namespace != null) {\n//            distinctAcc_0_dataview.setCurrentNamespace(namespace);\n//            distinct_view_0 = distinctAcc_0_dataview;\n//        } else {\n//            isNull$175 = acc.isNullAt(5);\n//            field$175 = null;\n//            if (!isNull$175) {\n//                field$175 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n//            }\n//            distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$175.getJavaObject();\n//        }\n//\n//        isNull$172 = acc.isNullAt(2);\n//        field$172 = -1L;\n//        if (!isNull$172) {\n//            field$172 = acc.getLong(2);\n//        }\n//\n//        agg0_count1 = field$170;\n//        ;\n//        agg0_count1IsNull = isNull$170;\n//\n//\n//        agg1_sum = field$171;\n//        ;\n//        agg1_sumIsNull = isNull$171;\n//\n//\n//        agg2_max = field$172;\n//        ;\n//        agg2_maxIsNull = isNull$172;\n//\n//\n//        agg3_min = field$173;\n//        ;\n//        agg3_minIsNull = isNull$173;\n//\n//\n//        agg4_count = field$174;\n//        ;\n//        agg4_countIsNull = isNull$174;\n//\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n//\n//\n//        acc$169 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (agg0_count1IsNull) {\n//            acc$169.setField(0, null);\n//        } else {\n//            acc$169.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            acc$169.setField(1, null);\n//        } else {\n//            acc$169.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            acc$169.setField(2, null);\n//        } else {\n//            acc$169.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            acc$169.setField(3, null);\n//        } else {\n//            acc$169.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            acc$169.setField(4, null);\n//        } else {\n//            acc$169.setField(4, agg4_count);\n//        }\n//\n//\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$168 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n//\n//        if (false) {\n//            acc$169.setField(5, null);\n//        } else {\n//            acc$169.setField(5, distinct_acc$168);\n//        }\n//\n//\n//        return acc$169;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData 
createAccumulators() throws Exception {\n//\n//\n//        acc$167 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (false) {\n//            acc$167.setField(0, null);\n//        } else {\n//            acc$167.setField(0, ((long) 0L));\n//        }\n//\n//\n//        if (true) {\n//            acc$167.setField(1, null);\n//        } else {\n//            acc$167.setField(1, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$167.setField(2, null);\n//        } else {\n//            acc$167.setField(2, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$167.setField(3, null);\n//        } else {\n//            acc$167.setField(3, ((long) -1L));\n//        }\n//\n//\n//        if (false) {\n//            acc$167.setField(4, null);\n//        } else {\n//            acc$167.setField(4, ((long) 0L));\n//        }\n//\n//\n//        org.apache.flink.table.api.dataview.MapView mapview$166 = new org.apache.flink.table.api.dataview.MapView();\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$166 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$166);\n//\n//        if (false) {\n//            acc$167.setField(5, null);\n//        } else {\n//            acc$167.setField(5, distinct_acc$166);\n//        }\n//\n//\n//        return acc$167;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getValue(Object ns) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//\n//        aggValue$231 = new org.apache.flink.table.data.GenericRowData(7);\n//\n//\n//        if (agg0_count1IsNull) {\n//            aggValue$231.setField(0, null);\n//        } else {\n//            aggValue$231.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            aggValue$231.setField(1, null);\n//        } else {\n//            aggValue$231.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            aggValue$231.setField(2, null);\n//        } else {\n//            aggValue$231.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            aggValue$231.setField(3, null);\n//        } else {\n//            aggValue$231.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            aggValue$231.setField(4, null);\n//        } else {\n//            aggValue$231.setField(4, agg4_count);\n//        }\n//\n//\n//        if (false) {\n//            aggValue$231.setField(5, null);\n//        } else {\n//            aggValue$231.setField(5, org.apache.flink.table.data.TimestampData\n//                    .fromEpochMillis(sliceAssigner$163.getWindowStart(namespace)));\n//        }\n//\n//\n//        if (false) {\n//            aggValue$231.setField(6, null);\n//        } else {\n//            aggValue$231.setField(6, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace));\n//        }\n//\n//\n//        return aggValue$231;\n//\n//    }\n//\n//    @Override\n//    public void cleanup(Object ns) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        distinctAcc_0_dataview.setCurrentNamespace(namespace);\n//        distinctAcc_0_dataview.clear();\n//\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//    }\n//}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/cumulate/global_agg/KeyProjection$301.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._02_cumulate_window.cumulate.global_agg;\n\n\npublic class KeyProjection$301 implements\n        org.apache.flink.table.runtime.generated.Projection<org.apache.flink.table.data.RowData,\n                org.apache.flink.table.data.binary.BinaryRowData> {\n\n    org.apache.flink.table.data.binary.BinaryRowData out = new org.apache.flink.table.data.binary.BinaryRowData(2);\n    org.apache.flink.table.data.writer.BinaryRowWriter outWriter =\n            new org.apache.flink.table.data.writer.BinaryRowWriter(out);\n\n    public KeyProjection$301(Object[] references) throws Exception {\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.binary.BinaryRowData apply(org.apache.flink.table.data.RowData in1) {\n        org.apache.flink.table.data.binary.BinaryStringData field$302;\n        boolean isNull$302;\n        int field$303;\n        boolean isNull$303;\n\n\n        outWriter.reset();\n\n        isNull$302 = in1.isNullAt(0);\n        field$302 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n        if (!isNull$302) {\n            field$302 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(0));\n        }\n        if (isNull$302) {\n            outWriter.setNullAt(0);\n        } else {\n            outWriter.writeString(0, field$302);\n        }\n\n\n        isNull$303 = in1.isNullAt(1);\n        field$303 = -1;\n        if (!isNull$303) {\n            field$303 = in1.getInt(1);\n        }\n        if (isNull$303) {\n            outWriter.setNullAt(1);\n        } else {\n            outWriter.writeInt(1, field$303);\n        }\n\n        outWriter.complete();\n\n\n        return out;\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/cumulate/global_agg/LocalWindowAggsHandler$162.java",
    "content": "//package flink.examples.sql._07.query._04_window_agg._02_cumulate_window.cumulate.global_agg;\n//\n//\n//public final class LocalWindowAggsHandler$162\n//        implements org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<java.lang.Long> {\n//\n//    private transient org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedSharedSliceAssigner\n//            sliceAssigner$95;\n//    long agg0_count1;\n//    boolean agg0_count1IsNull;\n//    long agg1_sum;\n//    boolean agg1_sumIsNull;\n//    long agg2_max;\n//    boolean agg2_maxIsNull;\n//    long agg3_min;\n//    boolean agg3_minIsNull;\n//    long agg4_count;\n//    boolean agg4_countIsNull;\n//    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n//    org.apache.flink.table.data.GenericRowData acc$97 = new org.apache.flink.table.data.GenericRowData(6);\n//    org.apache.flink.table.data.GenericRowData acc$99 = new org.apache.flink.table.data.GenericRowData(6);\n//    private org.apache.flink.table.api.dataview.MapView otherMapView$151;\n//    private transient org.apache.flink.table.data.conversion.RawObjectConverter converter$152;\n//    org.apache.flink.table.data.GenericRowData aggValue$161 = new org.apache.flink.table.data.GenericRowData(7);\n//\n//    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n//\n//    private java.lang.Long namespace;\n//\n//    public LocalWindowAggsHandler$162(Object[] references) throws Exception {\n//        sliceAssigner$95 =\n//                (((org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedSharedSliceAssigner) references[0]));\n//        converter$152 = (((org.apache.flink.table.data.conversion.RawObjectConverter) references[1]));\n//    }\n//\n//    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n//        return store.getRuntimeContext();\n//    }\n//\n//    @Override\n//    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n//        this.store = store;\n//\n//        converter$152.open(getRuntimeContext().getUserCodeClassLoader());\n//\n//    }\n//\n//    @Override\n//    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n//\n//        boolean isNull$106;\n//        long result$107;\n//        long field$108;\n//        boolean isNull$108;\n//        boolean isNull$109;\n//        long result$110;\n//        boolean isNull$113;\n//        boolean result$114;\n//        boolean isNull$118;\n//        boolean result$119;\n//        long field$123;\n//        boolean isNull$123;\n//        boolean isNull$125;\n//        long result$126;\n//        isNull$108 = accInput.isNullAt(2);\n//        field$108 = -1L;\n//        if (!isNull$108) {\n//            field$108 = accInput.getLong(2);\n//        }\n//        isNull$123 = accInput.isNullAt(3);\n//        field$123 = -1L;\n//        if (!isNull$123) {\n//            field$123 = accInput.getLong(3);\n//        }\n//\n//\n//        isNull$106 = agg0_count1IsNull || false;\n//        result$107 = -1L;\n//        if (!isNull$106) {\n//\n//            result$107 = (long) (agg0_count1 + ((long) 1L));\n//\n//        }\n//\n//        agg0_count1 = result$107;\n//        ;\n//        agg0_count1IsNull = isNull$106;\n//\n//\n//        long result$112 = -1L;\n//        boolean isNull$112;\n//        if (isNull$108) {\n//\n//            isNull$112 = agg1_sumIsNull;\n//            if (!isNull$112) {\n//  
              result$112 = agg1_sum;\n//            }\n//        } else {\n//            long result$111 = -1L;\n//            boolean isNull$111;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$111 = isNull$108;\n//                if (!isNull$111) {\n//                    result$111 = field$108;\n//                }\n//            } else {\n//\n//\n//                isNull$109 = agg1_sumIsNull || isNull$108;\n//                result$110 = -1L;\n//                if (!isNull$109) {\n//\n//                    result$110 = (long) (agg1_sum + field$108);\n//\n//                }\n//\n//                isNull$111 = isNull$109;\n//                if (!isNull$111) {\n//                    result$111 = result$110;\n//                }\n//            }\n//            isNull$112 = isNull$111;\n//            if (!isNull$112) {\n//                result$112 = result$111;\n//            }\n//        }\n//        agg1_sum = result$112;\n//        ;\n//        agg1_sumIsNull = isNull$112;\n//\n//\n//        long result$117 = -1L;\n//        boolean isNull$117;\n//        if (isNull$108) {\n//\n//            isNull$117 = agg2_maxIsNull;\n//            if (!isNull$117) {\n//                result$117 = agg2_max;\n//            }\n//        } else {\n//            long result$116 = -1L;\n//            boolean isNull$116;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$116 = isNull$108;\n//                if (!isNull$116) {\n//                    result$116 = field$108;\n//                }\n//            } else {\n//                isNull$113 = isNull$108 || agg2_maxIsNull;\n//                result$114 = false;\n//                if (!isNull$113) {\n//\n//                    result$114 = field$108 > agg2_max;\n//\n//                }\n//\n//                long result$115 = -1L;\n//                boolean isNull$115;\n//                if (result$114) {\n//\n//                    isNull$115 = isNull$108;\n//                    if (!isNull$115) {\n//                        result$115 = field$108;\n//                    }\n//                } else {\n//\n//                    isNull$115 = agg2_maxIsNull;\n//                    if (!isNull$115) {\n//                        result$115 = agg2_max;\n//                    }\n//                }\n//                isNull$116 = isNull$115;\n//                if (!isNull$116) {\n//                    result$116 = result$115;\n//                }\n//            }\n//            isNull$117 = isNull$116;\n//            if (!isNull$117) {\n//                result$117 = result$116;\n//            }\n//        }\n//        agg2_max = result$117;\n//        ;\n//        agg2_maxIsNull = isNull$117;\n//\n//\n//        long result$122 = -1L;\n//        boolean isNull$122;\n//        if (isNull$108) {\n//\n//            isNull$122 = agg3_minIsNull;\n//            if (!isNull$122) {\n//                result$122 = agg3_min;\n//            }\n//        } else {\n//            long result$121 = -1L;\n//            boolean isNull$121;\n//            if (agg3_minIsNull) {\n//\n//                isNull$121 = isNull$108;\n//                if (!isNull$121) {\n//                    result$121 = field$108;\n//                }\n//            } else {\n//                isNull$118 = isNull$108 || agg3_minIsNull;\n//                result$119 = false;\n//                if (!isNull$118) {\n//\n//                    result$119 = field$108 < agg3_min;\n//\n//                }\n//\n//                long result$120 = -1L;\n//                boolean 
isNull$120;\n//                if (result$119) {\n//\n//                    isNull$120 = isNull$108;\n//                    if (!isNull$120) {\n//                        result$120 = field$108;\n//                    }\n//                } else {\n//\n//                    isNull$120 = agg3_minIsNull;\n//                    if (!isNull$120) {\n//                        result$120 = agg3_min;\n//                    }\n//                }\n//                isNull$121 = isNull$120;\n//                if (!isNull$121) {\n//                    result$121 = result$120;\n//                }\n//            }\n//            isNull$122 = isNull$121;\n//            if (!isNull$122) {\n//                result$122 = result$121;\n//            }\n//        }\n//        agg3_min = result$122;\n//        ;\n//        agg3_minIsNull = isNull$122;\n//\n//\n//        java.lang.Long distinctKey$124 = (java.lang.Long) field$123;\n//        if (isNull$123) {\n//            distinctKey$124 = null;\n//        }\n//\n//        java.lang.Long value$128 = (java.lang.Long) distinct_view_0.get(distinctKey$124);\n//        if (value$128 == null) {\n//            value$128 = 0L;\n//        }\n//\n//        boolean is_distinct_value_changed_0 = false;\n//\n//        long existed$129 = ((long) value$128) & (1L << 0);\n//        if (existed$129 == 0) {  // not existed\n//            value$128 = ((long) value$128) | (1L << 0);\n//            is_distinct_value_changed_0 = true;\n//\n//            long result$127 = -1L;\n//            boolean isNull$127;\n//            if (isNull$123) {\n//\n//                isNull$127 = agg4_countIsNull;\n//                if (!isNull$127) {\n//                    result$127 = agg4_count;\n//                }\n//            } else {\n//\n//\n//                isNull$125 = agg4_countIsNull || false;\n//                result$126 = -1L;\n//                if (!isNull$125) {\n//\n//                    result$126 = (long) (agg4_count + ((long) 1L));\n//\n//                }\n//\n//                isNull$127 = isNull$125;\n//                if (!isNull$127) {\n//                    result$127 = result$126;\n//                }\n//            }\n//            agg4_count = result$127;\n//            ;\n//            agg4_countIsNull = isNull$127;\n//\n//        }\n//\n//        if (is_distinct_value_changed_0) {\n//            distinct_view_0.put(distinctKey$124, value$128);\n//        }\n//\n//\n//    }\n//\n//    @Override\n//    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n//\n//        throw new java.lang.RuntimeException(\n//                \"This function not require retract method, but the retract method is called.\");\n//\n//    }\n//\n//    @Override\n//    public void merge(Object ns, org.apache.flink.table.data.RowData otherAcc) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        long field$130;\n//        boolean isNull$130;\n//        boolean isNull$131;\n//        long result$132;\n//        long field$133;\n//        boolean isNull$133;\n//        boolean isNull$134;\n//        long result$135;\n//        long field$138;\n//        boolean isNull$138;\n//        boolean isNull$139;\n//        boolean result$140;\n//        long field$144;\n//        boolean isNull$144;\n//        boolean isNull$145;\n//        boolean result$146;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$150;\n//        boolean isNull$150;\n//        boolean isNull$156;\n//        long result$157;\n//        
isNull$130 = otherAcc.isNullAt(2);\n//        field$130 = -1L;\n//        if (!isNull$130) {\n//            field$130 = otherAcc.getLong(2);\n//        }\n//        isNull$133 = otherAcc.isNullAt(3);\n//        field$133 = -1L;\n//        if (!isNull$133) {\n//            field$133 = otherAcc.getLong(3);\n//        }\n//\n//        isNull$150 = otherAcc.isNullAt(7);\n//        field$150 = null;\n//        if (!isNull$150) {\n//            field$150 = ((org.apache.flink.table.data.binary.BinaryRawValueData) otherAcc.getRawValue(7));\n//        }\n//        otherMapView$151 = null;\n//        if (!isNull$150) {\n//            otherMapView$151 =\n//                    (org.apache.flink.table.api.dataview.MapView) converter$152\n//                            .toExternal((org.apache.flink.table.data.binary.BinaryRawValueData) field$150);\n//        }\n//\n//        isNull$144 = otherAcc.isNullAt(5);\n//        field$144 = -1L;\n//        if (!isNull$144) {\n//            field$144 = otherAcc.getLong(5);\n//        }\n//        isNull$138 = otherAcc.isNullAt(4);\n//        field$138 = -1L;\n//        if (!isNull$138) {\n//            field$138 = otherAcc.getLong(4);\n//        }\n//\n//\n//        isNull$131 = agg0_count1IsNull || isNull$130;\n//        result$132 = -1L;\n//        if (!isNull$131) {\n//\n//            result$132 = (long) (agg0_count1 + field$130);\n//\n//        }\n//\n//        agg0_count1 = result$132;\n//        ;\n//        agg0_count1IsNull = isNull$131;\n//\n//\n//        long result$137 = -1L;\n//        boolean isNull$137;\n//        if (isNull$133) {\n//\n//            isNull$137 = agg1_sumIsNull;\n//            if (!isNull$137) {\n//                result$137 = agg1_sum;\n//            }\n//        } else {\n//            long result$136 = -1L;\n//            boolean isNull$136;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$136 = isNull$133;\n//                if (!isNull$136) {\n//                    result$136 = field$133;\n//                }\n//            } else {\n//\n//\n//                isNull$134 = agg1_sumIsNull || isNull$133;\n//                result$135 = -1L;\n//                if (!isNull$134) {\n//\n//                    result$135 = (long) (agg1_sum + field$133);\n//\n//                }\n//\n//                isNull$136 = isNull$134;\n//                if (!isNull$136) {\n//                    result$136 = result$135;\n//                }\n//            }\n//            isNull$137 = isNull$136;\n//            if (!isNull$137) {\n//                result$137 = result$136;\n//            }\n//        }\n//        agg1_sum = result$137;\n//        ;\n//        agg1_sumIsNull = isNull$137;\n//\n//\n//        long result$143 = -1L;\n//        boolean isNull$143;\n//        if (isNull$138) {\n//\n//            isNull$143 = agg2_maxIsNull;\n//            if (!isNull$143) {\n//                result$143 = agg2_max;\n//            }\n//        } else {\n//            long result$142 = -1L;\n//            boolean isNull$142;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$142 = isNull$138;\n//                if (!isNull$142) {\n//                    result$142 = field$138;\n//                }\n//            } else {\n//                isNull$139 = isNull$138 || agg2_maxIsNull;\n//                result$140 = false;\n//                if (!isNull$139) {\n//\n//                    result$140 = field$138 > agg2_max;\n//\n//                }\n//\n//                long result$141 = -1L;\n//                boolean 
isNull$141;\n//                if (result$140) {\n//\n//                    isNull$141 = isNull$138;\n//                    if (!isNull$141) {\n//                        result$141 = field$138;\n//                    }\n//                } else {\n//\n//                    isNull$141 = agg2_maxIsNull;\n//                    if (!isNull$141) {\n//                        result$141 = agg2_max;\n//                    }\n//                }\n//                isNull$142 = isNull$141;\n//                if (!isNull$142) {\n//                    result$142 = result$141;\n//                }\n//            }\n//            isNull$143 = isNull$142;\n//            if (!isNull$143) {\n//                result$143 = result$142;\n//            }\n//        }\n//        agg2_max = result$143;\n//        ;\n//        agg2_maxIsNull = isNull$143;\n//\n//\n//        long result$149 = -1L;\n//        boolean isNull$149;\n//        if (isNull$144) {\n//\n//            isNull$149 = agg3_minIsNull;\n//            if (!isNull$149) {\n//                result$149 = agg3_min;\n//            }\n//        } else {\n//            long result$148 = -1L;\n//            boolean isNull$148;\n//            if (agg3_minIsNull) {\n//\n//                isNull$148 = isNull$144;\n//                if (!isNull$148) {\n//                    result$148 = field$144;\n//                }\n//            } else {\n//                isNull$145 = isNull$144 || agg3_minIsNull;\n//                result$146 = false;\n//                if (!isNull$145) {\n//\n//                    result$146 = field$144 < agg3_min;\n//\n//                }\n//\n//                long result$147 = -1L;\n//                boolean isNull$147;\n//                if (result$146) {\n//\n//                    isNull$147 = isNull$144;\n//                    if (!isNull$147) {\n//                        result$147 = field$144;\n//                    }\n//                } else {\n//\n//                    isNull$147 = agg3_minIsNull;\n//                    if (!isNull$147) {\n//                        result$147 = agg3_min;\n//                    }\n//                }\n//                isNull$148 = isNull$147;\n//                if (!isNull$148) {\n//                    result$148 = result$147;\n//                }\n//            }\n//            isNull$149 = isNull$148;\n//            if (!isNull$149) {\n//                result$149 = result$148;\n//            }\n//        }\n//        agg3_min = result$149;\n//        ;\n//        agg3_minIsNull = isNull$149;\n//\n//\n//        java.lang.Iterable<java.util.Map.Entry> otherEntries$159 =\n//                (java.lang.Iterable<java.util.Map.Entry>) otherMapView$151.entries();\n//        if (otherEntries$159 != null) {\n//            for (java.util.Map.Entry entry : otherEntries$159) {\n//                java.lang.Long distinctKey$153 = (java.lang.Long) entry.getKey();\n//                long field$154 = -1L;\n//                boolean isNull$155 = true;\n//                if (distinctKey$153 != null) {\n//                    isNull$155 = false;\n//                    field$154 = (long) distinctKey$153;\n//                }\n//                java.lang.Long otherValue = (java.lang.Long) entry.getValue();\n//                java.lang.Long thisValue = (java.lang.Long) distinct_view_0.get(distinctKey$153);\n//                if (thisValue == null) {\n//                    thisValue = 0L;\n//                }\n//                boolean is_distinct_value_changed_0 = false;\n//                boolean 
is_distinct_value_empty_0 = false;\n//\n//\n//                long existed$160 = ((long) thisValue) & (1L << 0);\n//                if (existed$160 == 0) {  // not existed\n//                    long otherExisted = ((long) otherValue) & (1L << 0);\n//                    if (otherExisted != 0) {  // existed in other\n//                        is_distinct_value_changed_0 = true;\n//                        // do accumulate\n//\n//                        long result$158 = -1L;\n//                        boolean isNull$158;\n//                        if (isNull$155) {\n//\n//                            isNull$158 = agg4_countIsNull;\n//                            if (!isNull$158) {\n//                                result$158 = agg4_count;\n//                            }\n//                        } else {\n//\n//\n//                            isNull$156 = agg4_countIsNull || false;\n//                            result$157 = -1L;\n//                            if (!isNull$156) {\n//\n//                                result$157 = (long) (agg4_count + ((long) 1L));\n//\n//                            }\n//\n//                            isNull$158 = isNull$156;\n//                            if (!isNull$158) {\n//                                result$158 = result$157;\n//                            }\n//                        }\n//                        agg4_count = result$158;\n//                        ;\n//                        agg4_countIsNull = isNull$158;\n//\n//                    }\n//                }\n//\n//                thisValue = ((long) thisValue) | ((long) otherValue);\n//                is_distinct_value_empty_0 = false;\n//\n//                if (is_distinct_value_empty_0) {\n//                    distinct_view_0.remove(distinctKey$153);\n//                } else if (is_distinct_value_changed_0) { // value is not empty and is changed, do update\n//                    distinct_view_0.put(distinctKey$153, thisValue);\n//                }\n//            } // end foreach\n//        } // end otherEntries != null\n//\n//\n//    }\n//\n//    @Override\n//    public void setAccumulators(Object ns, org.apache.flink.table.data.RowData acc)\n//            throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        long field$100;\n//        boolean isNull$100;\n//        long field$101;\n//        boolean isNull$101;\n//        long field$102;\n//        boolean isNull$102;\n//        long field$103;\n//        boolean isNull$103;\n//        long field$104;\n//        boolean isNull$104;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$105;\n//        boolean isNull$105;\n//        isNull$104 = acc.isNullAt(4);\n//        field$104 = -1L;\n//        if (!isNull$104) {\n//            field$104 = acc.getLong(4);\n//        }\n//        isNull$100 = acc.isNullAt(0);\n//        field$100 = -1L;\n//        if (!isNull$100) {\n//            field$100 = acc.getLong(0);\n//        }\n//        isNull$101 = acc.isNullAt(1);\n//        field$101 = -1L;\n//        if (!isNull$101) {\n//            field$101 = acc.getLong(1);\n//        }\n//        isNull$103 = acc.isNullAt(3);\n//        field$103 = -1L;\n//        if (!isNull$103) {\n//            field$103 = acc.getLong(3);\n//        }\n//\n//        isNull$105 = acc.isNullAt(5);\n//        field$105 = null;\n//        if (!isNull$105) {\n//            field$105 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n//        }\n//        distinct_view_0 = 
(org.apache.flink.table.api.dataview.MapView) field$105.getJavaObject();\n//\n//        isNull$102 = acc.isNullAt(2);\n//        field$102 = -1L;\n//        if (!isNull$102) {\n//            field$102 = acc.getLong(2);\n//        }\n//\n//        agg0_count1 = field$100;\n//        ;\n//        agg0_count1IsNull = isNull$100;\n//\n//\n//        agg1_sum = field$101;\n//        ;\n//        agg1_sumIsNull = isNull$101;\n//\n//\n//        agg2_max = field$102;\n//        ;\n//        agg2_maxIsNull = isNull$102;\n//\n//\n//        agg3_min = field$103;\n//        ;\n//        agg3_minIsNull = isNull$103;\n//\n//\n//        agg4_count = field$104;\n//        ;\n//        agg4_countIsNull = isNull$104;\n//\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n//\n//\n//        acc$99 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (agg0_count1IsNull) {\n//            acc$99.setField(0, null);\n//        } else {\n//            acc$99.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            acc$99.setField(1, null);\n//        } else {\n//            acc$99.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            acc$99.setField(2, null);\n//        } else {\n//            acc$99.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            acc$99.setField(3, null);\n//        } else {\n//            acc$99.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            acc$99.setField(4, null);\n//        } else {\n//            acc$99.setField(4, agg4_count);\n//        }\n//\n//\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$98 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n//\n//        if (false) {\n//            acc$99.setField(5, null);\n//        } else {\n//            acc$99.setField(5, distinct_acc$98);\n//        }\n//\n//\n//        return acc$99;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n//\n//\n//        acc$97 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (false) {\n//            acc$97.setField(0, null);\n//        } else {\n//            acc$97.setField(0, ((long) 0L));\n//        }\n//\n//\n//        if (true) {\n//            acc$97.setField(1, null);\n//        } else {\n//            acc$97.setField(1, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$97.setField(2, null);\n//        } else {\n//            acc$97.setField(2, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$97.setField(3, null);\n//        } else {\n//            acc$97.setField(3, ((long) -1L));\n//        }\n//\n//\n//        if (false) {\n//            acc$97.setField(4, null);\n//        } else {\n//            acc$97.setField(4, ((long) 0L));\n//        }\n//\n//\n//        org.apache.flink.table.api.dataview.MapView mapview$96 = new org.apache.flink.table.api.dataview.MapView();\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$96 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$96);\n//\n//        if (false) {\n//            acc$97.setField(5, null);\n//        } else {\n//            acc$97.setField(5, distinct_acc$96);\n//        }\n//\n//\n//       
 return acc$97;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getValue(Object ns) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//\n//        aggValue$161 = new org.apache.flink.table.data.GenericRowData(7);\n//\n//\n//        if (agg0_count1IsNull) {\n//            aggValue$161.setField(0, null);\n//        } else {\n//            aggValue$161.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            aggValue$161.setField(1, null);\n//        } else {\n//            aggValue$161.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            aggValue$161.setField(2, null);\n//        } else {\n//            aggValue$161.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            aggValue$161.setField(3, null);\n//        } else {\n//            aggValue$161.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            aggValue$161.setField(4, null);\n//        } else {\n//            aggValue$161.setField(4, agg4_count);\n//        }\n//\n//\n//        if (false) {\n//            aggValue$161.setField(5, null);\n//        } else {\n//            aggValue$161.setField(5, org.apache.flink.table.data.TimestampData\n//                    .fromEpochMillis(sliceAssigner$95.getWindowStart(namespace)));\n//        }\n//\n//\n//        if (false) {\n//            aggValue$161.setField(6, null);\n//        } else {\n//            aggValue$161.setField(6, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace));\n//        }\n//\n//\n//        return aggValue$161;\n//\n//    }\n//\n//    @Override\n//    public void cleanup(Object ns) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//    }\n//}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/cumulate/global_agg/StateWindowAggsHandler$300.java",
    "content": "//package flink.examples.sql._07.query._04_window_agg._02_cumulate_window.cumulate.global_agg;\n//\n//\n//public final class StateWindowAggsHandler$300\n//        implements org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<java.lang.Long> {\n//\n//    private transient org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedSharedSliceAssigner\n//            sliceAssigner$233;\n//    long agg0_count1;\n//    boolean agg0_count1IsNull;\n//    long agg1_sum;\n//    boolean agg1_sumIsNull;\n//    long agg2_max;\n//    boolean agg2_maxIsNull;\n//    long agg3_min;\n//    boolean agg3_minIsNull;\n//    long agg4_count;\n//    boolean agg4_countIsNull;\n//    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$234;\n//    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$235;\n//    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview;\n//    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_raw_value;\n//    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview_backup;\n//    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_backup_raw_value;\n//    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n//    private org.apache.flink.table.api.dataview.MapView distinct_backup_view_0;\n//    org.apache.flink.table.data.GenericRowData acc$237 = new org.apache.flink.table.data.GenericRowData(6);\n//    org.apache.flink.table.data.GenericRowData acc$239 = new org.apache.flink.table.data.GenericRowData(6);\n//    org.apache.flink.table.data.GenericRowData aggValue$299 = new org.apache.flink.table.data.GenericRowData(7);\n//\n//    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n//\n//    private java.lang.Long namespace;\n//\n//    public StateWindowAggsHandler$300(Object[] references) throws Exception {\n//        sliceAssigner$233 =\n//                (((org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.SlicedSharedSliceAssigner) references[0]));\n//        externalSerializer$234 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[1]));\n//        externalSerializer$235 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[2]));\n//    }\n//\n//    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n//        return store.getRuntimeContext();\n//    }\n//\n//    @Override\n//    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n//        this.store = store;\n//\n//        distinctAcc_0_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n//                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$234, externalSerializer$235);\n//        distinctAcc_0_dataview_raw_value =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview);\n//\n//\n//        distinctAcc_0_dataview_backup = (org.apache.flink.table.runtime.dataview.StateMapView) store\n//                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$234, externalSerializer$235);\n//        distinctAcc_0_dataview_backup_raw_value =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview_backup);\n//\n//        distinct_view_0 
= distinctAcc_0_dataview;\n//        distinct_backup_view_0 = distinctAcc_0_dataview_backup;\n//    }\n//\n//    @Override\n//    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n//\n//        boolean isNull$246;\n//        long result$247;\n//        long field$248;\n//        boolean isNull$248;\n//        boolean isNull$249;\n//        long result$250;\n//        boolean isNull$253;\n//        boolean result$254;\n//        boolean isNull$258;\n//        boolean result$259;\n//        long field$263;\n//        boolean isNull$263;\n//        boolean isNull$265;\n//        long result$266;\n//        isNull$248 = accInput.isNullAt(2);\n//        field$248 = -1L;\n//        if (!isNull$248) {\n//            field$248 = accInput.getLong(2);\n//        }\n//        isNull$263 = accInput.isNullAt(3);\n//        field$263 = -1L;\n//        if (!isNull$263) {\n//            field$263 = accInput.getLong(3);\n//        }\n//\n//\n//        isNull$246 = agg0_count1IsNull || false;\n//        result$247 = -1L;\n//        if (!isNull$246) {\n//\n//            result$247 = (long) (agg0_count1 + ((long) 1L));\n//\n//        }\n//\n//        agg0_count1 = result$247;\n//        ;\n//        agg0_count1IsNull = isNull$246;\n//\n//\n//        long result$252 = -1L;\n//        boolean isNull$252;\n//        if (isNull$248) {\n//\n//            isNull$252 = agg1_sumIsNull;\n//            if (!isNull$252) {\n//                result$252 = agg1_sum;\n//            }\n//        } else {\n//            long result$251 = -1L;\n//            boolean isNull$251;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$251 = isNull$248;\n//                if (!isNull$251) {\n//                    result$251 = field$248;\n//                }\n//            } else {\n//\n//\n//                isNull$249 = agg1_sumIsNull || isNull$248;\n//                result$250 = -1L;\n//                if (!isNull$249) {\n//\n//                    result$250 = (long) (agg1_sum + field$248);\n//\n//                }\n//\n//                isNull$251 = isNull$249;\n//                if (!isNull$251) {\n//                    result$251 = result$250;\n//                }\n//            }\n//            isNull$252 = isNull$251;\n//            if (!isNull$252) {\n//                result$252 = result$251;\n//            }\n//        }\n//        agg1_sum = result$252;\n//        ;\n//        agg1_sumIsNull = isNull$252;\n//\n//\n//        long result$257 = -1L;\n//        boolean isNull$257;\n//        if (isNull$248) {\n//\n//            isNull$257 = agg2_maxIsNull;\n//            if (!isNull$257) {\n//                result$257 = agg2_max;\n//            }\n//        } else {\n//            long result$256 = -1L;\n//            boolean isNull$256;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$256 = isNull$248;\n//                if (!isNull$256) {\n//                    result$256 = field$248;\n//                }\n//            } else {\n//                isNull$253 = isNull$248 || agg2_maxIsNull;\n//                result$254 = false;\n//                if (!isNull$253) {\n//\n//                    result$254 = field$248 > agg2_max;\n//\n//                }\n//\n//                long result$255 = -1L;\n//                boolean isNull$255;\n//                if (result$254) {\n//\n//                    isNull$255 = isNull$248;\n//                    if (!isNull$255) {\n//                        result$255 = field$248;\n//                    }\n//              
  } else {\n//\n//                    isNull$255 = agg2_maxIsNull;\n//                    if (!isNull$255) {\n//                        result$255 = agg2_max;\n//                    }\n//                }\n//                isNull$256 = isNull$255;\n//                if (!isNull$256) {\n//                    result$256 = result$255;\n//                }\n//            }\n//            isNull$257 = isNull$256;\n//            if (!isNull$257) {\n//                result$257 = result$256;\n//            }\n//        }\n//        agg2_max = result$257;\n//        ;\n//        agg2_maxIsNull = isNull$257;\n//\n//\n//        long result$262 = -1L;\n//        boolean isNull$262;\n//        if (isNull$248) {\n//\n//            isNull$262 = agg3_minIsNull;\n//            if (!isNull$262) {\n//                result$262 = agg3_min;\n//            }\n//        } else {\n//            long result$261 = -1L;\n//            boolean isNull$261;\n//            if (agg3_minIsNull) {\n//\n//                isNull$261 = isNull$248;\n//                if (!isNull$261) {\n//                    result$261 = field$248;\n//                }\n//            } else {\n//                isNull$258 = isNull$248 || agg3_minIsNull;\n//                result$259 = false;\n//                if (!isNull$258) {\n//\n//                    result$259 = field$248 < agg3_min;\n//\n//                }\n//\n//                long result$260 = -1L;\n//                boolean isNull$260;\n//                if (result$259) {\n//\n//                    isNull$260 = isNull$248;\n//                    if (!isNull$260) {\n//                        result$260 = field$248;\n//                    }\n//                } else {\n//\n//                    isNull$260 = agg3_minIsNull;\n//                    if (!isNull$260) {\n//                        result$260 = agg3_min;\n//                    }\n//                }\n//                isNull$261 = isNull$260;\n//                if (!isNull$261) {\n//                    result$261 = result$260;\n//                }\n//            }\n//            isNull$262 = isNull$261;\n//            if (!isNull$262) {\n//                result$262 = result$261;\n//            }\n//        }\n//        agg3_min = result$262;\n//        ;\n//        agg3_minIsNull = isNull$262;\n//\n//\n//        java.lang.Long distinctKey$264 = (java.lang.Long) field$263;\n//        if (isNull$263) {\n//            distinctKey$264 = null;\n//        }\n//\n//        java.lang.Long value$268 = (java.lang.Long) distinct_view_0.get(distinctKey$264);\n//        if (value$268 == null) {\n//            value$268 = 0L;\n//        }\n//\n//        boolean is_distinct_value_changed_0 = false;\n//\n//        long existed$269 = ((long) value$268) & (1L << 0);\n//        if (existed$269 == 0) {  // not existed\n//            value$268 = ((long) value$268) | (1L << 0);\n//            is_distinct_value_changed_0 = true;\n//\n//            long result$267 = -1L;\n//            boolean isNull$267;\n//            if (isNull$263) {\n//\n//                isNull$267 = agg4_countIsNull;\n//                if (!isNull$267) {\n//                    result$267 = agg4_count;\n//                }\n//            } else {\n//\n//\n//                isNull$265 = agg4_countIsNull || false;\n//                result$266 = -1L;\n//                if (!isNull$265) {\n//\n//                    result$266 = (long) (agg4_count + ((long) 1L));\n//\n//                }\n//\n//                isNull$267 = isNull$265;\n//                if (!isNull$267) {\n//  
                  result$267 = result$266;\n//                }\n//            }\n//            agg4_count = result$267;\n//            ;\n//            agg4_countIsNull = isNull$267;\n//\n//        }\n//\n//        if (is_distinct_value_changed_0) {\n//            distinct_view_0.put(distinctKey$264, value$268);\n//        }\n//\n//\n//    }\n//\n//    @Override\n//    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n//\n//        throw new java.lang.RuntimeException(\n//                \"This function not require retract method, but the retract method is called.\");\n//\n//    }\n//\n//    @Override\n//    public void merge(Object ns, org.apache.flink.table.data.RowData otherAcc) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        long field$270;\n//        boolean isNull$270;\n//        boolean isNull$271;\n//        long result$272;\n//        long field$273;\n//        boolean isNull$273;\n//        boolean isNull$274;\n//        long result$275;\n//        long field$278;\n//        boolean isNull$278;\n//        boolean isNull$279;\n//        boolean result$280;\n//        long field$284;\n//        boolean isNull$284;\n//        boolean isNull$285;\n//        boolean result$286;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$290;\n//        boolean isNull$290;\n//        boolean isNull$294;\n//        long result$295;\n//        isNull$278 = otherAcc.isNullAt(2);\n//        field$278 = -1L;\n//        if (!isNull$278) {\n//            field$278 = otherAcc.getLong(2);\n//        }\n//        isNull$273 = otherAcc.isNullAt(1);\n//        field$273 = -1L;\n//        if (!isNull$273) {\n//            field$273 = otherAcc.getLong(1);\n//        }\n//        isNull$270 = otherAcc.isNullAt(0);\n//        field$270 = -1L;\n//        if (!isNull$270) {\n//            field$270 = otherAcc.getLong(0);\n//        }\n//        isNull$284 = otherAcc.isNullAt(3);\n//        field$284 = -1L;\n//        if (!isNull$284) {\n//            field$284 = otherAcc.getLong(3);\n//        }\n//\n//        // when namespace is null, the dataview is used in heap, no key and namespace set\n//        if (namespace != null) {\n//            distinctAcc_0_dataview_backup.setCurrentNamespace(namespace);\n//            distinct_backup_view_0 = distinctAcc_0_dataview_backup;\n//        } else {\n//            isNull$290 = otherAcc.isNullAt(5);\n//            field$290 = null;\n//            if (!isNull$290) {\n//                field$290 = ((org.apache.flink.table.data.binary.BinaryRawValueData) otherAcc.getRawValue(5));\n//            }\n//            distinct_backup_view_0 = (org.apache.flink.table.api.dataview.MapView) field$290.getJavaObject();\n//        }\n//\n//\n//        isNull$271 = agg0_count1IsNull || isNull$270;\n//        result$272 = -1L;\n//        if (!isNull$271) {\n//\n//            result$272 = (long) (agg0_count1 + field$270);\n//\n//        }\n//\n//        agg0_count1 = result$272;\n//        ;\n//        agg0_count1IsNull = isNull$271;\n//\n//\n//        long result$277 = -1L;\n//        boolean isNull$277;\n//        if (isNull$273) {\n//\n//            isNull$277 = agg1_sumIsNull;\n//            if (!isNull$277) {\n//                result$277 = agg1_sum;\n//            }\n//        } else {\n//            long result$276 = -1L;\n//            boolean isNull$276;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$276 = isNull$273;\n//                if (!isNull$276) {\n//               
     result$276 = field$273;\n//                }\n//            } else {\n//\n//\n//                isNull$274 = agg1_sumIsNull || isNull$273;\n//                result$275 = -1L;\n//                if (!isNull$274) {\n//\n//                    result$275 = (long) (agg1_sum + field$273);\n//\n//                }\n//\n//                isNull$276 = isNull$274;\n//                if (!isNull$276) {\n//                    result$276 = result$275;\n//                }\n//            }\n//            isNull$277 = isNull$276;\n//            if (!isNull$277) {\n//                result$277 = result$276;\n//            }\n//        }\n//        agg1_sum = result$277;\n//        ;\n//        agg1_sumIsNull = isNull$277;\n//\n//\n//        long result$283 = -1L;\n//        boolean isNull$283;\n//        if (isNull$278) {\n//\n//            isNull$283 = agg2_maxIsNull;\n//            if (!isNull$283) {\n//                result$283 = agg2_max;\n//            }\n//        } else {\n//            long result$282 = -1L;\n//            boolean isNull$282;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$282 = isNull$278;\n//                if (!isNull$282) {\n//                    result$282 = field$278;\n//                }\n//            } else {\n//                isNull$279 = isNull$278 || agg2_maxIsNull;\n//                result$280 = false;\n//                if (!isNull$279) {\n//\n//                    result$280 = field$278 > agg2_max;\n//\n//                }\n//\n//                long result$281 = -1L;\n//                boolean isNull$281;\n//                if (result$280) {\n//\n//                    isNull$281 = isNull$278;\n//                    if (!isNull$281) {\n//                        result$281 = field$278;\n//                    }\n//                } else {\n//\n//                    isNull$281 = agg2_maxIsNull;\n//                    if (!isNull$281) {\n//                        result$281 = agg2_max;\n//                    }\n//                }\n//                isNull$282 = isNull$281;\n//                if (!isNull$282) {\n//                    result$282 = result$281;\n//                }\n//            }\n//            isNull$283 = isNull$282;\n//            if (!isNull$283) {\n//                result$283 = result$282;\n//            }\n//        }\n//        agg2_max = result$283;\n//        ;\n//        agg2_maxIsNull = isNull$283;\n//\n//\n//        long result$289 = -1L;\n//        boolean isNull$289;\n//        if (isNull$284) {\n//\n//            isNull$289 = agg3_minIsNull;\n//            if (!isNull$289) {\n//                result$289 = agg3_min;\n//            }\n//        } else {\n//            long result$288 = -1L;\n//            boolean isNull$288;\n//            if (agg3_minIsNull) {\n//\n//                isNull$288 = isNull$284;\n//                if (!isNull$288) {\n//                    result$288 = field$284;\n//                }\n//            } else {\n//                isNull$285 = isNull$284 || agg3_minIsNull;\n//                result$286 = false;\n//                if (!isNull$285) {\n//\n//                    result$286 = field$284 < agg3_min;\n//\n//                }\n//\n//                long result$287 = -1L;\n//                boolean isNull$287;\n//                if (result$286) {\n//\n//                    isNull$287 = isNull$284;\n//                    if (!isNull$287) {\n//                        result$287 = field$284;\n//                    }\n//                } else {\n//\n//                    isNull$287 = 
agg3_minIsNull;\n//                    if (!isNull$287) {\n//                        result$287 = agg3_min;\n//                    }\n//                }\n//                isNull$288 = isNull$287;\n//                if (!isNull$288) {\n//                    result$288 = result$287;\n//                }\n//            }\n//            isNull$289 = isNull$288;\n//            if (!isNull$289) {\n//                result$289 = result$288;\n//            }\n//        }\n//        agg3_min = result$289;\n//        ;\n//        agg3_minIsNull = isNull$289;\n//\n//\n//        java.lang.Iterable<java.util.Map.Entry> otherEntries$297 =\n//                (java.lang.Iterable<java.util.Map.Entry>) distinct_backup_view_0.entries();\n//        if (otherEntries$297 != null) {\n//            for (java.util.Map.Entry entry : otherEntries$297) {\n//                java.lang.Long distinctKey$291 = (java.lang.Long) entry.getKey();\n//                long field$292 = -1L;\n//                boolean isNull$293 = true;\n//                if (distinctKey$291 != null) {\n//                    isNull$293 = false;\n//                    field$292 = (long) distinctKey$291;\n//                }\n//                java.lang.Long otherValue = (java.lang.Long) entry.getValue();\n//                java.lang.Long thisValue = (java.lang.Long) distinct_view_0.get(distinctKey$291);\n//                if (thisValue == null) {\n//                    thisValue = 0L;\n//                }\n//                boolean is_distinct_value_changed_0 = false;\n//                boolean is_distinct_value_empty_0 = false;\n//\n//\n//                long existed$298 = ((long) thisValue) & (1L << 0);\n//                if (existed$298 == 0) {  // not existed\n//                    long otherExisted = ((long) otherValue) & (1L << 0);\n//                    if (otherExisted != 0) {  // existed in other\n//                        is_distinct_value_changed_0 = true;\n//                        // do accumulate\n//\n//                        long result$296 = -1L;\n//                        boolean isNull$296;\n//                        if (isNull$293) {\n//\n//                            isNull$296 = agg4_countIsNull;\n//                            if (!isNull$296) {\n//                                result$296 = agg4_count;\n//                            }\n//                        } else {\n//\n//\n//                            isNull$294 = agg4_countIsNull || false;\n//                            result$295 = -1L;\n//                            if (!isNull$294) {\n//\n//                                result$295 = (long) (agg4_count + ((long) 1L));\n//\n//                            }\n//\n//                            isNull$296 = isNull$294;\n//                            if (!isNull$296) {\n//                                result$296 = result$295;\n//                            }\n//                        }\n//                        agg4_count = result$296;\n//                        ;\n//                        agg4_countIsNull = isNull$296;\n//\n//                    }\n//                }\n//\n//                thisValue = ((long) thisValue) | ((long) otherValue);\n//                is_distinct_value_empty_0 = false;\n//\n//                if (is_distinct_value_empty_0) {\n//                    distinct_view_0.remove(distinctKey$291);\n//                } else if (is_distinct_value_changed_0) { // value is not empty and is changed, do update\n//                    distinct_view_0.put(distinctKey$291, thisValue);\n//                
}\n//            } // end foreach\n//        } // end otherEntries != null\n//\n//\n//    }\n//\n//    @Override\n//    public void setAccumulators(Object ns, org.apache.flink.table.data.RowData acc)\n//            throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        long field$240;\n//        boolean isNull$240;\n//        long field$241;\n//        boolean isNull$241;\n//        long field$242;\n//        boolean isNull$242;\n//        long field$243;\n//        boolean isNull$243;\n//        long field$244;\n//        boolean isNull$244;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$245;\n//        boolean isNull$245;\n//        isNull$244 = acc.isNullAt(4);\n//        field$244 = -1L;\n//        if (!isNull$244) {\n//            field$244 = acc.getLong(4);\n//        }\n//        isNull$240 = acc.isNullAt(0);\n//        field$240 = -1L;\n//        if (!isNull$240) {\n//            field$240 = acc.getLong(0);\n//        }\n//        isNull$241 = acc.isNullAt(1);\n//        field$241 = -1L;\n//        if (!isNull$241) {\n//            field$241 = acc.getLong(1);\n//        }\n//        isNull$243 = acc.isNullAt(3);\n//        field$243 = -1L;\n//        if (!isNull$243) {\n//            field$243 = acc.getLong(3);\n//        }\n//\n//        // when namespace is null, the dataview is used in heap, no key and namespace set\n//        if (namespace != null) {\n//            distinctAcc_0_dataview.setCurrentNamespace(namespace);\n//            distinct_view_0 = distinctAcc_0_dataview;\n//        } else {\n//            isNull$245 = acc.isNullAt(5);\n//            field$245 = null;\n//            if (!isNull$245) {\n//                field$245 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n//            }\n//            distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$245.getJavaObject();\n//        }\n//\n//        isNull$242 = acc.isNullAt(2);\n//        field$242 = -1L;\n//        if (!isNull$242) {\n//            field$242 = acc.getLong(2);\n//        }\n//\n//        agg0_count1 = field$240;\n//        ;\n//        agg0_count1IsNull = isNull$240;\n//\n//\n//        agg1_sum = field$241;\n//        ;\n//        agg1_sumIsNull = isNull$241;\n//\n//\n//        agg2_max = field$242;\n//        ;\n//        agg2_maxIsNull = isNull$242;\n//\n//\n//        agg3_min = field$243;\n//        ;\n//        agg3_minIsNull = isNull$243;\n//\n//\n//        agg4_count = field$244;\n//        ;\n//        agg4_countIsNull = isNull$244;\n//\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n//\n//\n//        acc$239 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (agg0_count1IsNull) {\n//            acc$239.setField(0, null);\n//        } else {\n//            acc$239.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            acc$239.setField(1, null);\n//        } else {\n//            acc$239.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            acc$239.setField(2, null);\n//        } else {\n//            acc$239.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            acc$239.setField(3, null);\n//        } else {\n//            acc$239.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            acc$239.setField(4, null);\n//        } else {\n//            
acc$239.setField(4, agg4_count);\n//        }\n//\n//\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$238 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n//\n//        if (false) {\n//            acc$239.setField(5, null);\n//        } else {\n//            acc$239.setField(5, distinct_acc$238);\n//        }\n//\n//\n//        return acc$239;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n//\n//\n//        acc$237 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (false) {\n//            acc$237.setField(0, null);\n//        } else {\n//            acc$237.setField(0, ((long) 0L));\n//        }\n//\n//\n//        if (true) {\n//            acc$237.setField(1, null);\n//        } else {\n//            acc$237.setField(1, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$237.setField(2, null);\n//        } else {\n//            acc$237.setField(2, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$237.setField(3, null);\n//        } else {\n//            acc$237.setField(3, ((long) -1L));\n//        }\n//\n//\n//        if (false) {\n//            acc$237.setField(4, null);\n//        } else {\n//            acc$237.setField(4, ((long) 0L));\n//        }\n//\n//\n//        org.apache.flink.table.api.dataview.MapView mapview$236 = new org.apache.flink.table.api.dataview.MapView();\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$236 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$236);\n//\n//        if (false) {\n//            acc$237.setField(5, null);\n//        } else {\n//            acc$237.setField(5, distinct_acc$236);\n//        }\n//\n//\n//        return acc$237;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getValue(Object ns) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//\n//        aggValue$299 = new org.apache.flink.table.data.GenericRowData(7);\n//\n//\n//        if (agg0_count1IsNull) {\n//            aggValue$299.setField(0, null);\n//        } else {\n//            aggValue$299.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            aggValue$299.setField(1, null);\n//        } else {\n//            aggValue$299.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            aggValue$299.setField(2, null);\n//        } else {\n//            aggValue$299.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            aggValue$299.setField(3, null);\n//        } else {\n//            aggValue$299.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            aggValue$299.setField(4, null);\n//        } else {\n//            aggValue$299.setField(4, agg4_count);\n//        }\n//\n//\n//        if (false) {\n//            aggValue$299.setField(5, null);\n//        } else {\n//            aggValue$299.setField(5, org.apache.flink.table.data.TimestampData\n//                    .fromEpochMillis(sliceAssigner$233.getWindowStart(namespace)));\n//        }\n//\n//\n//        if (false) {\n//            aggValue$299.setField(6, null);\n//        } else {\n//            aggValue$299.setField(6, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace));\n//        
}\n//\n//\n//        return aggValue$299;\n//\n//    }\n//\n//    @Override\n//    public void cleanup(Object ns) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        distinctAcc_0_dataview.setCurrentNamespace(namespace);\n//        distinctAcc_0_dataview.clear();\n//\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//    }\n//}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/cumulate/local_agg/KeyProjection$89.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._02_cumulate_window.cumulate.local_agg;\n\n\npublic class KeyProjection$89 implements\n        org.apache.flink.table.runtime.generated.Projection<org.apache.flink.table.data.RowData,\n                org.apache.flink.table.data.binary.BinaryRowData> {\n\n    org.apache.flink.table.data.binary.BinaryRowData out = new org.apache.flink.table.data.binary.BinaryRowData(2);\n    org.apache.flink.table.data.writer.BinaryRowWriter outWriter =\n            new org.apache.flink.table.data.writer.BinaryRowWriter(out);\n\n    public KeyProjection$89(Object[] references) throws Exception {\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.binary.BinaryRowData apply(org.apache.flink.table.data.RowData in1) {\n        org.apache.flink.table.data.binary.BinaryStringData field$90;\n        boolean isNull$90;\n        int field$91;\n        boolean isNull$91;\n\n\n        outWriter.reset();\n\n        isNull$90 = in1.isNullAt(0);\n        field$90 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n        if (!isNull$90) {\n            field$90 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(0));\n        }\n        if (isNull$90) {\n            outWriter.setNullAt(0);\n        } else {\n            outWriter.writeString(0, field$90);\n        }\n\n\n        isNull$91 = in1.isNullAt(1);\n        field$91 = -1;\n        if (!isNull$91) {\n            field$91 = in1.getInt(1);\n        }\n        if (isNull$91) {\n            outWriter.setNullAt(1);\n        } else {\n            outWriter.writeInt(1, field$91);\n        }\n\n        outWriter.complete();\n\n\n        return out;\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/cumulate/local_agg/LocalWindowAggsHandler$88.java",
    "content": "//package flink.examples.sql._07.query._04_window_agg._02_cumulate_window.cumulate.local_agg;\n//\n//\n//public final class LocalWindowAggsHandler$88\n//        implements org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<java.lang.Long> {\n//\n//    private transient org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.CumulativeSliceAssigner\n//            sliceAssigner$21;\n//    long agg0_count1;\n//    boolean agg0_count1IsNull;\n//    long agg1_sum;\n//    boolean agg1_sumIsNull;\n//    long agg2_max;\n//    boolean agg2_maxIsNull;\n//    long agg3_min;\n//    boolean agg3_minIsNull;\n//    long agg4_count;\n//    boolean agg4_countIsNull;\n//    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n//    org.apache.flink.table.data.GenericRowData acc$23 = new org.apache.flink.table.data.GenericRowData(6);\n//    org.apache.flink.table.data.GenericRowData acc$25 = new org.apache.flink.table.data.GenericRowData(6);\n//    private org.apache.flink.table.api.dataview.MapView otherMapView$77;\n//    private transient org.apache.flink.table.data.conversion.RawObjectConverter converter$78;\n//    org.apache.flink.table.data.GenericRowData aggValue$87 = new org.apache.flink.table.data.GenericRowData(5);\n//\n//    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n//\n//    private java.lang.Long namespace;\n//\n//    public LocalWindowAggsHandler$88(Object[] references) throws Exception {\n//        sliceAssigner$21 =\n//                (((org.apache.flink.table.runtime.operators.window.slicing.SliceAssigners.CumulativeSliceAssigner) references[0]));\n//        converter$78 = (((org.apache.flink.table.data.conversion.RawObjectConverter) references[1]));\n//    }\n//\n//    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n//        return store.getRuntimeContext();\n//    }\n//\n//    @Override\n//    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n//        this.store = store;\n//\n//        converter$78.open(getRuntimeContext().getUserCodeClassLoader());\n//\n//    }\n//\n//    @Override\n//    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n//\n//        boolean isNull$32;\n//        long result$33;\n//        long field$34;\n//        boolean isNull$34;\n//        boolean isNull$35;\n//        long result$36;\n//        boolean isNull$39;\n//        boolean result$40;\n//        boolean isNull$44;\n//        boolean result$45;\n//        long field$49;\n//        boolean isNull$49;\n//        boolean isNull$51;\n//        long result$52;\n//        isNull$34 = accInput.isNullAt(2);\n//        field$34 = -1L;\n//        if (!isNull$34) {\n//            field$34 = accInput.getLong(2);\n//        }\n//        isNull$49 = accInput.isNullAt(3);\n//        field$49 = -1L;\n//        if (!isNull$49) {\n//            field$49 = accInput.getLong(3);\n//        }\n//\n//\n//        isNull$32 = agg0_count1IsNull || false;\n//        result$33 = -1L;\n//        if (!isNull$32) {\n//\n//            result$33 = (long) (agg0_count1 + ((long) 1L));\n//\n//        }\n//\n//        agg0_count1 = result$33;\n//        ;\n//        agg0_count1IsNull = isNull$32;\n//\n//\n//        long result$38 = -1L;\n//        boolean isNull$38;\n//        if (isNull$34) {\n//\n//            isNull$38 = agg1_sumIsNull;\n//            if (!isNull$38) {\n//                result$38 = agg1_sum;\n//      
      }\n//        } else {\n//            long result$37 = -1L;\n//            boolean isNull$37;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$37 = isNull$34;\n//                if (!isNull$37) {\n//                    result$37 = field$34;\n//                }\n//            } else {\n//\n//\n//                isNull$35 = agg1_sumIsNull || isNull$34;\n//                result$36 = -1L;\n//                if (!isNull$35) {\n//\n//                    result$36 = (long) (agg1_sum + field$34);\n//\n//                }\n//\n//                isNull$37 = isNull$35;\n//                if (!isNull$37) {\n//                    result$37 = result$36;\n//                }\n//            }\n//            isNull$38 = isNull$37;\n//            if (!isNull$38) {\n//                result$38 = result$37;\n//            }\n//        }\n//        agg1_sum = result$38;\n//        ;\n//        agg1_sumIsNull = isNull$38;\n//\n//\n//        long result$43 = -1L;\n//        boolean isNull$43;\n//        if (isNull$34) {\n//\n//            isNull$43 = agg2_maxIsNull;\n//            if (!isNull$43) {\n//                result$43 = agg2_max;\n//            }\n//        } else {\n//            long result$42 = -1L;\n//            boolean isNull$42;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$42 = isNull$34;\n//                if (!isNull$42) {\n//                    result$42 = field$34;\n//                }\n//            } else {\n//                isNull$39 = isNull$34 || agg2_maxIsNull;\n//                result$40 = false;\n//                if (!isNull$39) {\n//\n//                    result$40 = field$34 > agg2_max;\n//\n//                }\n//\n//                long result$41 = -1L;\n//                boolean isNull$41;\n//                if (result$40) {\n//\n//                    isNull$41 = isNull$34;\n//                    if (!isNull$41) {\n//                        result$41 = field$34;\n//                    }\n//                } else {\n//\n//                    isNull$41 = agg2_maxIsNull;\n//                    if (!isNull$41) {\n//                        result$41 = agg2_max;\n//                    }\n//                }\n//                isNull$42 = isNull$41;\n//                if (!isNull$42) {\n//                    result$42 = result$41;\n//                }\n//            }\n//            isNull$43 = isNull$42;\n//            if (!isNull$43) {\n//                result$43 = result$42;\n//            }\n//        }\n//        agg2_max = result$43;\n//        ;\n//        agg2_maxIsNull = isNull$43;\n//\n//\n//        long result$48 = -1L;\n//        boolean isNull$48;\n//        if (isNull$34) {\n//\n//            isNull$48 = agg3_minIsNull;\n//            if (!isNull$48) {\n//                result$48 = agg3_min;\n//            }\n//        } else {\n//            long result$47 = -1L;\n//            boolean isNull$47;\n//            if (agg3_minIsNull) {\n//\n//                isNull$47 = isNull$34;\n//                if (!isNull$47) {\n//                    result$47 = field$34;\n//                }\n//            } else {\n//                isNull$44 = isNull$34 || agg3_minIsNull;\n//                result$45 = false;\n//                if (!isNull$44) {\n//\n//                    result$45 = field$34 < agg3_min;\n//\n//                }\n//\n//                long result$46 = -1L;\n//                boolean isNull$46;\n//                if (result$45) {\n//\n//                    isNull$46 = isNull$34;\n//                    if 
(!isNull$46) {\n//                        result$46 = field$34;\n//                    }\n//                } else {\n//\n//                    isNull$46 = agg3_minIsNull;\n//                    if (!isNull$46) {\n//                        result$46 = agg3_min;\n//                    }\n//                }\n//                isNull$47 = isNull$46;\n//                if (!isNull$47) {\n//                    result$47 = result$46;\n//                }\n//            }\n//            isNull$48 = isNull$47;\n//            if (!isNull$48) {\n//                result$48 = result$47;\n//            }\n//        }\n//        agg3_min = result$48;\n//        ;\n//        agg3_minIsNull = isNull$48;\n//\n//\n//        java.lang.Long distinctKey$50 = (java.lang.Long) field$49;\n//        if (isNull$49) {\n//            distinctKey$50 = null;\n//        }\n//\n//        java.lang.Long value$54 = (java.lang.Long) distinct_view_0.get(distinctKey$50);\n//        if (value$54 == null) {\n//            value$54 = 0L;\n//        }\n//\n//        boolean is_distinct_value_changed_0 = false;\n//\n//        long existed$55 = ((long) value$54) & (1L << 0);\n//        if (existed$55 == 0) {  // not existed\n//            value$54 = ((long) value$54) | (1L << 0);\n//            is_distinct_value_changed_0 = true;\n//\n//            long result$53 = -1L;\n//            boolean isNull$53;\n//            if (isNull$49) {\n//\n//                isNull$53 = agg4_countIsNull;\n//                if (!isNull$53) {\n//                    result$53 = agg4_count;\n//                }\n//            } else {\n//\n//\n//                isNull$51 = agg4_countIsNull || false;\n//                result$52 = -1L;\n//                if (!isNull$51) {\n//\n//                    result$52 = (long) (agg4_count + ((long) 1L));\n//\n//                }\n//\n//                isNull$53 = isNull$51;\n//                if (!isNull$53) {\n//                    result$53 = result$52;\n//                }\n//            }\n//            agg4_count = result$53;\n//            ;\n//            agg4_countIsNull = isNull$53;\n//\n//        }\n//\n//        if (is_distinct_value_changed_0) {\n//            distinct_view_0.put(distinctKey$50, value$54);\n//        }\n//\n//\n//    }\n//\n//    @Override\n//    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n//\n//        throw new java.lang.RuntimeException(\n//                \"This function not require retract method, but the retract method is called.\");\n//\n//    }\n//\n//    @Override\n//    public void merge(Object ns, org.apache.flink.table.data.RowData otherAcc) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        long field$56;\n//        boolean isNull$56;\n//        boolean isNull$57;\n//        long result$58;\n//        long field$59;\n//        boolean isNull$59;\n//        boolean isNull$60;\n//        long result$61;\n//        long field$64;\n//        boolean isNull$64;\n//        boolean isNull$65;\n//        boolean result$66;\n//        long field$70;\n//        boolean isNull$70;\n//        boolean isNull$71;\n//        boolean result$72;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$76;\n//        boolean isNull$76;\n//        boolean isNull$82;\n//        long result$83;\n//        isNull$64 = otherAcc.isNullAt(2);\n//        field$64 = -1L;\n//        if (!isNull$64) {\n//            field$64 = otherAcc.getLong(2);\n//        }\n//        isNull$59 = otherAcc.isNullAt(1);\n//       
 field$59 = -1L;\n//        if (!isNull$59) {\n//            field$59 = otherAcc.getLong(1);\n//        }\n//        isNull$56 = otherAcc.isNullAt(0);\n//        field$56 = -1L;\n//        if (!isNull$56) {\n//            field$56 = otherAcc.getLong(0);\n//        }\n//        isNull$70 = otherAcc.isNullAt(3);\n//        field$70 = -1L;\n//        if (!isNull$70) {\n//            field$70 = otherAcc.getLong(3);\n//        }\n//\n//        isNull$76 = otherAcc.isNullAt(5);\n//        field$76 = null;\n//        if (!isNull$76) {\n//            field$76 = ((org.apache.flink.table.data.binary.BinaryRawValueData) otherAcc.getRawValue(5));\n//        }\n//        otherMapView$77 = null;\n//        if (!isNull$76) {\n//            otherMapView$77 =\n//                    (org.apache.flink.table.api.dataview.MapView) converter$78\n//                            .toExternal((org.apache.flink.table.data.binary.BinaryRawValueData) field$76);\n//        }\n//\n//\n//        isNull$57 = agg0_count1IsNull || isNull$56;\n//        result$58 = -1L;\n//        if (!isNull$57) {\n//\n//            result$58 = (long) (agg0_count1 + field$56);\n//\n//        }\n//\n//        agg0_count1 = result$58;\n//        ;\n//        agg0_count1IsNull = isNull$57;\n//\n//\n//        long result$63 = -1L;\n//        boolean isNull$63;\n//        if (isNull$59) {\n//\n//            isNull$63 = agg1_sumIsNull;\n//            if (!isNull$63) {\n//                result$63 = agg1_sum;\n//            }\n//        } else {\n//            long result$62 = -1L;\n//            boolean isNull$62;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$62 = isNull$59;\n//                if (!isNull$62) {\n//                    result$62 = field$59;\n//                }\n//            } else {\n//\n//\n//                isNull$60 = agg1_sumIsNull || isNull$59;\n//                result$61 = -1L;\n//                if (!isNull$60) {\n//\n//                    result$61 = (long) (agg1_sum + field$59);\n//\n//                }\n//\n//                isNull$62 = isNull$60;\n//                if (!isNull$62) {\n//                    result$62 = result$61;\n//                }\n//            }\n//            isNull$63 = isNull$62;\n//            if (!isNull$63) {\n//                result$63 = result$62;\n//            }\n//        }\n//        agg1_sum = result$63;\n//        ;\n//        agg1_sumIsNull = isNull$63;\n//\n//\n//        long result$69 = -1L;\n//        boolean isNull$69;\n//        if (isNull$64) {\n//\n//            isNull$69 = agg2_maxIsNull;\n//            if (!isNull$69) {\n//                result$69 = agg2_max;\n//            }\n//        } else {\n//            long result$68 = -1L;\n//            boolean isNull$68;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$68 = isNull$64;\n//                if (!isNull$68) {\n//                    result$68 = field$64;\n//                }\n//            } else {\n//                isNull$65 = isNull$64 || agg2_maxIsNull;\n//                result$66 = false;\n//                if (!isNull$65) {\n//\n//                    result$66 = field$64 > agg2_max;\n//\n//                }\n//\n//                long result$67 = -1L;\n//                boolean isNull$67;\n//                if (result$66) {\n//\n//                    isNull$67 = isNull$64;\n//                    if (!isNull$67) {\n//                        result$67 = field$64;\n//                    }\n//                } else {\n//\n//                    isNull$67 = agg2_maxIsNull;\n//     
               if (!isNull$67) {\n//                        result$67 = agg2_max;\n//                    }\n//                }\n//                isNull$68 = isNull$67;\n//                if (!isNull$68) {\n//                    result$68 = result$67;\n//                }\n//            }\n//            isNull$69 = isNull$68;\n//            if (!isNull$69) {\n//                result$69 = result$68;\n//            }\n//        }\n//        agg2_max = result$69;\n//        ;\n//        agg2_maxIsNull = isNull$69;\n//\n//\n//        long result$75 = -1L;\n//        boolean isNull$75;\n//        if (isNull$70) {\n//\n//            isNull$75 = agg3_minIsNull;\n//            if (!isNull$75) {\n//                result$75 = agg3_min;\n//            }\n//        } else {\n//            long result$74 = -1L;\n//            boolean isNull$74;\n//            if (agg3_minIsNull) {\n//\n//                isNull$74 = isNull$70;\n//                if (!isNull$74) {\n//                    result$74 = field$70;\n//                }\n//            } else {\n//                isNull$71 = isNull$70 || agg3_minIsNull;\n//                result$72 = false;\n//                if (!isNull$71) {\n//\n//                    result$72 = field$70 < agg3_min;\n//\n//                }\n//\n//                long result$73 = -1L;\n//                boolean isNull$73;\n//                if (result$72) {\n//\n//                    isNull$73 = isNull$70;\n//                    if (!isNull$73) {\n//                        result$73 = field$70;\n//                    }\n//                } else {\n//\n//                    isNull$73 = agg3_minIsNull;\n//                    if (!isNull$73) {\n//                        result$73 = agg3_min;\n//                    }\n//                }\n//                isNull$74 = isNull$73;\n//                if (!isNull$74) {\n//                    result$74 = result$73;\n//                }\n//            }\n//            isNull$75 = isNull$74;\n//            if (!isNull$75) {\n//                result$75 = result$74;\n//            }\n//        }\n//        agg3_min = result$75;\n//        ;\n//        agg3_minIsNull = isNull$75;\n//\n//\n//        java.lang.Iterable<java.util.Map.Entry> otherEntries$85 =\n//                (java.lang.Iterable<java.util.Map.Entry>) otherMapView$77.entries();\n//        if (otherEntries$85 != null) {\n//            for (java.util.Map.Entry entry : otherEntries$85) {\n//                java.lang.Long distinctKey$79 = (java.lang.Long) entry.getKey();\n//                long field$80 = -1L;\n//                boolean isNull$81 = true;\n//                if (distinctKey$79 != null) {\n//                    isNull$81 = false;\n//                    field$80 = (long) distinctKey$79;\n//                }\n//                java.lang.Long otherValue = (java.lang.Long) entry.getValue();\n//                java.lang.Long thisValue = (java.lang.Long) distinct_view_0.get(distinctKey$79);\n//                if (thisValue == null) {\n//                    thisValue = 0L;\n//                }\n//                boolean is_distinct_value_changed_0 = false;\n//                boolean is_distinct_value_empty_0 = false;\n//\n//\n//                long existed$86 = ((long) thisValue) & (1L << 0);\n//                if (existed$86 == 0) {  // not existed\n//                    long otherExisted = ((long) otherValue) & (1L << 0);\n//                    if (otherExisted != 0) {  // existed in other\n//                        is_distinct_value_changed_0 = true;\n//              
          // do accumulate\n//\n//                        long result$84 = -1L;\n//                        boolean isNull$84;\n//                        if (isNull$81) {\n//\n//                            isNull$84 = agg4_countIsNull;\n//                            if (!isNull$84) {\n//                                result$84 = agg4_count;\n//                            }\n//                        } else {\n//\n//\n//                            isNull$82 = agg4_countIsNull || false;\n//                            result$83 = -1L;\n//                            if (!isNull$82) {\n//\n//                                result$83 = (long) (agg4_count + ((long) 1L));\n//\n//                            }\n//\n//                            isNull$84 = isNull$82;\n//                            if (!isNull$84) {\n//                                result$84 = result$83;\n//                            }\n//                        }\n//                        agg4_count = result$84;\n//                        ;\n//                        agg4_countIsNull = isNull$84;\n//\n//                    }\n//                }\n//\n//                thisValue = ((long) thisValue) | ((long) otherValue);\n//                is_distinct_value_empty_0 = false;\n//\n//                if (is_distinct_value_empty_0) {\n//                    distinct_view_0.remove(distinctKey$79);\n//                } else if (is_distinct_value_changed_0) { // value is not empty and is changed, do update\n//                    distinct_view_0.put(distinctKey$79, thisValue);\n//                }\n//            } // end foreach\n//        } // end otherEntries != null\n//\n//\n//    }\n//\n//    @Override\n//    public void setAccumulators(Object ns, org.apache.flink.table.data.RowData acc)\n//            throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//        long field$26;\n//        boolean isNull$26;\n//        long field$27;\n//        boolean isNull$27;\n//        long field$28;\n//        boolean isNull$28;\n//        long field$29;\n//        boolean isNull$29;\n//        long field$30;\n//        boolean isNull$30;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$31;\n//        boolean isNull$31;\n//        isNull$30 = acc.isNullAt(4);\n//        field$30 = -1L;\n//        if (!isNull$30) {\n//            field$30 = acc.getLong(4);\n//        }\n//        isNull$26 = acc.isNullAt(0);\n//        field$26 = -1L;\n//        if (!isNull$26) {\n//            field$26 = acc.getLong(0);\n//        }\n//        isNull$27 = acc.isNullAt(1);\n//        field$27 = -1L;\n//        if (!isNull$27) {\n//            field$27 = acc.getLong(1);\n//        }\n//        isNull$29 = acc.isNullAt(3);\n//        field$29 = -1L;\n//        if (!isNull$29) {\n//            field$29 = acc.getLong(3);\n//        }\n//\n//        isNull$31 = acc.isNullAt(5);\n//        field$31 = null;\n//        if (!isNull$31) {\n//            field$31 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n//        }\n//        distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$31.getJavaObject();\n//\n//        isNull$28 = acc.isNullAt(2);\n//        field$28 = -1L;\n//        if (!isNull$28) {\n//            field$28 = acc.getLong(2);\n//        }\n//\n//        agg0_count1 = field$26;\n//        ;\n//        agg0_count1IsNull = isNull$26;\n//\n//\n//        agg1_sum = field$27;\n//        ;\n//        agg1_sumIsNull = isNull$27;\n//\n//\n//        agg2_max = field$28;\n//        
;\n//        agg2_maxIsNull = isNull$28;\n//\n//\n//        agg3_min = field$29;\n//        ;\n//        agg3_minIsNull = isNull$29;\n//\n//\n//        agg4_count = field$30;\n//        ;\n//        agg4_countIsNull = isNull$30;\n//\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n//\n//\n//        acc$25 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (agg0_count1IsNull) {\n//            acc$25.setField(0, null);\n//        } else {\n//            acc$25.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            acc$25.setField(1, null);\n//        } else {\n//            acc$25.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            acc$25.setField(2, null);\n//        } else {\n//            acc$25.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            acc$25.setField(3, null);\n//        } else {\n//            acc$25.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            acc$25.setField(4, null);\n//        } else {\n//            acc$25.setField(4, agg4_count);\n//        }\n//\n//\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$24 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n//\n//        if (false) {\n//            acc$25.setField(5, null);\n//        } else {\n//            acc$25.setField(5, distinct_acc$24);\n//        }\n//\n//\n//        return acc$25;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n//\n//\n//        acc$23 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (false) {\n//            acc$23.setField(0, null);\n//        } else {\n//            acc$23.setField(0, ((long) 0L));\n//        }\n//\n//\n//        if (true) {\n//            acc$23.setField(1, null);\n//        } else {\n//            acc$23.setField(1, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$23.setField(2, null);\n//        } else {\n//            acc$23.setField(2, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$23.setField(3, null);\n//        } else {\n//            acc$23.setField(3, ((long) -1L));\n//        }\n//\n//\n//        if (false) {\n//            acc$23.setField(4, null);\n//        } else {\n//            acc$23.setField(4, ((long) 0L));\n//        }\n//\n//\n//        org.apache.flink.table.api.dataview.MapView mapview$22 = new org.apache.flink.table.api.dataview.MapView();\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$22 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$22);\n//\n//        if (false) {\n//            acc$23.setField(5, null);\n//        } else {\n//            acc$23.setField(5, distinct_acc$22);\n//        }\n//\n//\n//        return acc$23;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getValue(Object ns) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//\n//        aggValue$87 = new org.apache.flink.table.data.GenericRowData(5);\n//\n//\n//        if (agg0_count1IsNull) {\n//            aggValue$87.setField(0, null);\n//        } else {\n//            aggValue$87.setField(0, agg0_count1);\n//        }\n//\n//\n//        if 
(agg1_sumIsNull) {\n//            aggValue$87.setField(1, null);\n//        } else {\n//            aggValue$87.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            aggValue$87.setField(2, null);\n//        } else {\n//            aggValue$87.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            aggValue$87.setField(3, null);\n//        } else {\n//            aggValue$87.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            aggValue$87.setField(4, null);\n//        } else {\n//            aggValue$87.setField(4, agg4_count);\n//        }\n//\n//\n//        return aggValue$87;\n//\n//    }\n//\n//    @Override\n//    public void cleanup(Object ns) throws Exception {\n//        namespace = (java.lang.Long) ns;\n//\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//    }\n//}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/earlyfire/GroupAggsHandler$210.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._02_cumulate_window.earlyfire;\n\n\n/**\n * {@link org.apache.flink.streaming.api.operators.KeyedProcessOperator}\n */\npublic final class GroupAggsHandler$210 implements org.apache.flink.table.runtime.generated.AggsHandleFunction {\n\n    long agg0_sum;\n    boolean agg0_sumIsNull;\n    long agg0_count;\n    boolean agg0_countIsNull;\n    long agg1_sum;\n    boolean agg1_sumIsNull;\n    long agg1_count;\n    boolean agg1_countIsNull;\n    private transient org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction\n            function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$105;\n    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$106;\n    private org.apache.flink.table.runtime.dataview.StateMapView agg2$map_dataview;\n    private org.apache.flink.table.data.binary.BinaryRawValueData agg2$map_dataview_raw_value;\n    private org.apache.flink.table.runtime.dataview.StateMapView agg2$map_dataview_backup;\n    private org.apache.flink.table.data.binary.BinaryRawValueData agg2$map_dataview_backup_raw_value;\n    private transient org.apache.flink.table.planner.functions.aggfunctions.MinWithRetractAggFunction\n            function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396;\n    private org.apache.flink.table.runtime.dataview.StateMapView agg3$map_dataview;\n    private org.apache.flink.table.data.binary.BinaryRawValueData agg3$map_dataview_raw_value;\n    private org.apache.flink.table.runtime.dataview.StateMapView agg3$map_dataview_backup;\n    private org.apache.flink.table.data.binary.BinaryRawValueData agg3$map_dataview_backup_raw_value;\n    long agg4_sum;\n    boolean agg4_sumIsNull;\n    long agg4_count;\n    boolean agg4_countIsNull;\n    private org.apache.flink.table.runtime.dataview.StateMapView agg5$map_dataview;\n    private org.apache.flink.table.data.binary.BinaryRawValueData agg5$map_dataview_raw_value;\n    private org.apache.flink.table.runtime.dataview.StateMapView agg5$map_dataview_backup;\n    private org.apache.flink.table.data.binary.BinaryRawValueData agg5$map_dataview_backup_raw_value;\n    long agg6_count1;\n    boolean agg6_count1IsNull;\n    private transient org.apache.flink.table.data.conversion.StructuredObjectConverter converter$107;\n    private transient org.apache.flink.table.data.conversion.StructuredObjectConverter converter$109;\n    org.apache.flink.table.data.GenericRowData acc$112 = new org.apache.flink.table.data.GenericRowData(10);\n    org.apache.flink.table.data.GenericRowData acc$113 = new org.apache.flink.table.data.GenericRowData(10);\n    org.apache.flink.table.data.UpdatableRowData field$119;\n    private org.apache.flink.table.data.RowData agg2_acc_internal;\n    private org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator\n            agg2_acc_external;\n    org.apache.flink.table.data.UpdatableRowData field$121;\n    private org.apache.flink.table.data.RowData agg3_acc_internal;\n    private org.apache.flink.table.planner.functions.aggfunctions.MinWithRetractAggFunction.MinWithRetractAccumulator\n            agg3_acc_external;\n    org.apache.flink.table.data.UpdatableRowData field$125;\n    private 
org.apache.flink.table.data.RowData agg5_acc_internal;\n    private org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator\n            agg5_acc_external;\n    org.apache.flink.table.data.GenericRowData aggValue$209 = new org.apache.flink.table.data.GenericRowData(6);\n\n    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n\n    public GroupAggsHandler$210(java.lang.Object[] references) throws Exception {\n        function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448 =\n                (((org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction) references[0]));\n        externalSerializer$105 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[1]));\n        externalSerializer$106 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[2]));\n        function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396 =\n                (((org.apache.flink.table.planner.functions.aggfunctions.MinWithRetractAggFunction) references[3]));\n        function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448 =\n                (((org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction) references[4]));\n        converter$107 = (((org.apache.flink.table.data.conversion.StructuredObjectConverter) references[5]));\n        converter$109 = (((org.apache.flink.table.data.conversion.StructuredObjectConverter) references[6]));\n    }\n\n    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n        return store.getRuntimeContext();\n    }\n\n    @Override\n    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n        this.store = store;\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                .open(new org.apache.flink.table.functions.FunctionContext(store.getRuntimeContext()));\n\n\n        agg2$map_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"agg2$map\", false, externalSerializer$105, externalSerializer$106);\n        agg2$map_dataview_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(agg2$map_dataview);\n\n\n        agg2$map_dataview_backup = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"agg2$map\", false, externalSerializer$105, externalSerializer$106);\n        agg2$map_dataview_backup_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(agg2$map_dataview_backup);\n\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396\n                .open(new org.apache.flink.table.functions.FunctionContext(store.getRuntimeContext()));\n\n\n        agg3$map_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"agg3$map\", false, externalSerializer$105, externalSerializer$106);\n        agg3$map_dataview_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(agg3$map_dataview);\n\n\n        agg3$map_dataview_backup = 
(org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"agg3$map\", false, externalSerializer$105, externalSerializer$106);\n        agg3$map_dataview_backup_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(agg3$map_dataview_backup);\n\n\n        agg5$map_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"agg5$map\", false, externalSerializer$105, externalSerializer$106);\n        agg5$map_dataview_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(agg5$map_dataview);\n\n\n        agg5$map_dataview_backup = (org.apache.flink.table.runtime.dataview.StateMapView) store\n                .getStateMapView(\"agg5$map\", false, externalSerializer$105, externalSerializer$106);\n        agg5$map_dataview_backup_raw_value =\n                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(agg5$map_dataview_backup);\n\n\n        converter$107.open(getRuntimeContext().getUserCodeClassLoader());\n\n\n        converter$109.open(getRuntimeContext().getUserCodeClassLoader());\n\n    }\n\n    @Override\n    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n\n        long field$127;\n        boolean isNull$127;\n        boolean isNull$128;\n        long result$129;\n        boolean isNull$132;\n        long result$133;\n        long field$135;\n        boolean isNull$135;\n        boolean isNull$136;\n        long result$137;\n        boolean isNull$140;\n        long result$141;\n        long field$143;\n        boolean isNull$143;\n        long field$144;\n        boolean isNull$144;\n        long field$145;\n        boolean isNull$145;\n        boolean isNull$146;\n        long result$147;\n        boolean isNull$150;\n        long result$151;\n        long field$153;\n        boolean isNull$153;\n        boolean isNull$154;\n        long result$155;\n        isNull$144 = accInput.isNullAt(5);\n        field$144 = -1L;\n        if (!isNull$144) {\n            field$144 = accInput.getLong(5);\n        }\n        isNull$127 = accInput.isNullAt(2);\n        field$127 = -1L;\n        if (!isNull$127) {\n            field$127 = accInput.getLong(2);\n        }\n        isNull$143 = accInput.isNullAt(4);\n        field$143 = -1L;\n        if (!isNull$143) {\n            field$143 = accInput.getLong(4);\n        }\n        isNull$135 = accInput.isNullAt(3);\n        field$135 = -1L;\n        if (!isNull$135) {\n            field$135 = accInput.getLong(3);\n        }\n        isNull$145 = accInput.isNullAt(6);\n        field$145 = -1L;\n        if (!isNull$145) {\n            field$145 = accInput.getLong(6);\n        }\n        isNull$153 = accInput.isNullAt(1);\n        field$153 = -1L;\n        if (!isNull$153) {\n            field$153 = accInput.getLong(1);\n        }\n\n\n        long result$131 = -1L;\n        boolean isNull$131;\n        if (isNull$127) {\n\n            isNull$131 = agg0_sumIsNull;\n            if (!isNull$131) {\n                result$131 = agg0_sum;\n            }\n        } else {\n            long result$130 = -1L;\n            boolean isNull$130;\n            if (agg0_sumIsNull) {\n\n                isNull$130 = isNull$127;\n                if (!isNull$130) {\n                    result$130 = field$127;\n                }\n            } else {\n\n\n                isNull$128 = agg0_sumIsNull || isNull$127;\n                result$129 = -1L;\n       
         if (!isNull$128) {\n\n                    result$129 = (long) (agg0_sum + field$127);\n\n                }\n\n                isNull$130 = isNull$128;\n                if (!isNull$130) {\n                    result$130 = result$129;\n                }\n            }\n            isNull$131 = isNull$130;\n            if (!isNull$131) {\n                result$131 = result$130;\n            }\n        }\n        agg0_sum = result$131;\n        ;\n        agg0_sumIsNull = isNull$131;\n\n\n        long result$134 = -1L;\n        boolean isNull$134;\n        if (isNull$127) {\n\n            isNull$134 = agg0_countIsNull;\n            if (!isNull$134) {\n                result$134 = agg0_count;\n            }\n        } else {\n\n\n            isNull$132 = agg0_countIsNull || false;\n            result$133 = -1L;\n            if (!isNull$132) {\n\n                result$133 = (long) (agg0_count + ((long) 1L));\n\n            }\n\n            isNull$134 = isNull$132;\n            if (!isNull$134) {\n                result$134 = result$133;\n            }\n        }\n        agg0_count = result$134;\n        ;\n        agg0_countIsNull = isNull$134;\n\n\n        long result$139 = -1L;\n        boolean isNull$139;\n        if (isNull$135) {\n\n            isNull$139 = agg1_sumIsNull;\n            if (!isNull$139) {\n                result$139 = agg1_sum;\n            }\n        } else {\n            long result$138 = -1L;\n            boolean isNull$138;\n            if (agg1_sumIsNull) {\n\n                isNull$138 = isNull$135;\n                if (!isNull$138) {\n                    result$138 = field$135;\n                }\n            } else {\n\n\n                isNull$136 = agg1_sumIsNull || isNull$135;\n                result$137 = -1L;\n                if (!isNull$136) {\n\n                    result$137 = (long) (agg1_sum + field$135);\n\n                }\n\n                isNull$138 = isNull$136;\n                if (!isNull$138) {\n                    result$138 = result$137;\n                }\n            }\n            isNull$139 = isNull$138;\n            if (!isNull$139) {\n                result$139 = result$138;\n            }\n        }\n        agg1_sum = result$139;\n        ;\n        agg1_sumIsNull = isNull$139;\n\n\n        long result$142 = -1L;\n        boolean isNull$142;\n        if (isNull$135) {\n\n            isNull$142 = agg1_countIsNull;\n            if (!isNull$142) {\n                result$142 = agg1_count;\n            }\n        } else {\n\n\n            isNull$140 = agg1_countIsNull || false;\n            result$141 = -1L;\n            if (!isNull$140) {\n\n                result$141 = (long) (agg1_count + ((long) 1L));\n\n            }\n\n            isNull$142 = isNull$140;\n            if (!isNull$142) {\n                result$142 = result$141;\n            }\n        }\n        agg1_count = result$142;\n        ;\n        agg1_countIsNull = isNull$142;\n\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                .accumulate(agg2_acc_external, isNull$143 ? null : ((java.lang.Long) field$143));\n\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396\n                .accumulate(agg3_acc_external, isNull$144 ? 
null : ((java.lang.Long) field$144));\n\n\n        long result$149 = -1L;\n        boolean isNull$149;\n        if (isNull$145) {\n\n            isNull$149 = agg4_sumIsNull;\n            if (!isNull$149) {\n                result$149 = agg4_sum;\n            }\n        } else {\n            long result$148 = -1L;\n            boolean isNull$148;\n            if (agg4_sumIsNull) {\n\n                isNull$148 = isNull$145;\n                if (!isNull$148) {\n                    result$148 = field$145;\n                }\n            } else {\n\n\n                isNull$146 = agg4_sumIsNull || isNull$145;\n                result$147 = -1L;\n                if (!isNull$146) {\n\n                    result$147 = (long) (agg4_sum + field$145);\n\n                }\n\n                isNull$148 = isNull$146;\n                if (!isNull$148) {\n                    result$148 = result$147;\n                }\n            }\n            isNull$149 = isNull$148;\n            if (!isNull$149) {\n                result$149 = result$148;\n            }\n        }\n        agg4_sum = result$149;\n        ;\n        agg4_sumIsNull = isNull$149;\n\n\n        long result$152 = -1L;\n        boolean isNull$152;\n        if (isNull$145) {\n\n            isNull$152 = agg4_countIsNull;\n            if (!isNull$152) {\n                result$152 = agg4_count;\n            }\n        } else {\n\n\n            isNull$150 = agg4_countIsNull || false;\n            result$151 = -1L;\n            if (!isNull$150) {\n\n                result$151 = (long) (agg4_count + ((long) 1L));\n\n            }\n\n            isNull$152 = isNull$150;\n            if (!isNull$152) {\n                result$152 = result$151;\n            }\n        }\n        agg4_count = result$152;\n        ;\n        agg4_countIsNull = isNull$152;\n\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                .accumulate(agg5_acc_external, isNull$153 ? 
null : ((java.lang.Long) field$153));\n\n\n        isNull$154 = agg6_count1IsNull || false;\n        result$155 = -1L;\n        if (!isNull$154) {\n\n            result$155 = (long) (agg6_count1 + ((long) 1L));\n\n        }\n\n        agg6_count1 = result$155;\n        ;\n        agg6_count1IsNull = isNull$154;\n\n\n    }\n\n    @Override\n    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n\n        long field$156;\n        boolean isNull$156;\n        boolean isNull$157;\n        long result$158;\n        boolean isNull$159;\n        long result$160;\n        boolean isNull$163;\n        long result$164;\n        long field$166;\n        boolean isNull$166;\n        boolean isNull$167;\n        long result$168;\n        boolean isNull$169;\n        long result$170;\n        boolean isNull$173;\n        long result$174;\n        long field$176;\n        boolean isNull$176;\n        long field$177;\n        boolean isNull$177;\n        long field$178;\n        boolean isNull$178;\n        boolean isNull$179;\n        long result$180;\n        boolean isNull$181;\n        long result$182;\n        boolean isNull$185;\n        long result$186;\n        long field$188;\n        boolean isNull$188;\n        boolean isNull$189;\n        long result$190;\n        isNull$166 = retractInput.isNullAt(3);\n        field$166 = -1L;\n        if (!isNull$166) {\n            field$166 = retractInput.getLong(3);\n        }\n        isNull$156 = retractInput.isNullAt(2);\n        field$156 = -1L;\n        if (!isNull$156) {\n            field$156 = retractInput.getLong(2);\n        }\n        isNull$178 = retractInput.isNullAt(6);\n        field$178 = -1L;\n        if (!isNull$178) {\n            field$178 = retractInput.getLong(6);\n        }\n        isNull$176 = retractInput.isNullAt(4);\n        field$176 = -1L;\n        if (!isNull$176) {\n            field$176 = retractInput.getLong(4);\n        }\n        isNull$177 = retractInput.isNullAt(5);\n        field$177 = -1L;\n        if (!isNull$177) {\n            field$177 = retractInput.getLong(5);\n        }\n        isNull$188 = retractInput.isNullAt(1);\n        field$188 = -1L;\n        if (!isNull$188) {\n            field$188 = retractInput.getLong(1);\n        }\n\n\n        long result$162 = -1L;\n        boolean isNull$162;\n        if (isNull$156) {\n\n            isNull$162 = agg0_sumIsNull;\n            if (!isNull$162) {\n                result$162 = agg0_sum;\n            }\n        } else {\n            long result$161 = -1L;\n            boolean isNull$161;\n            if (agg0_sumIsNull) {\n\n\n                isNull$157 = false || isNull$156;\n                result$158 = -1L;\n                if (!isNull$157) {\n\n                    result$158 = (long) (((long) 0L) - field$156);\n\n                }\n\n                isNull$161 = isNull$157;\n                if (!isNull$161) {\n                    result$161 = result$158;\n                }\n            } else {\n\n\n                isNull$159 = agg0_sumIsNull || isNull$156;\n                result$160 = -1L;\n                if (!isNull$159) {\n\n                    result$160 = (long) (agg0_sum - field$156);\n\n                }\n\n                isNull$161 = isNull$159;\n                if (!isNull$161) {\n                    result$161 = result$160;\n                }\n            }\n            isNull$162 = isNull$161;\n            if (!isNull$162) {\n                result$162 = result$161;\n            }\n        }\n        
agg0_sum = result$162;\n        ;\n        agg0_sumIsNull = isNull$162;\n\n\n        long result$165 = -1L;\n        boolean isNull$165;\n        if (isNull$156) {\n\n            isNull$165 = agg0_countIsNull;\n            if (!isNull$165) {\n                result$165 = agg0_count;\n            }\n        } else {\n\n\n            isNull$163 = agg0_countIsNull || false;\n            result$164 = -1L;\n            if (!isNull$163) {\n\n                result$164 = (long) (agg0_count - ((long) 1L));\n\n            }\n\n            isNull$165 = isNull$163;\n            if (!isNull$165) {\n                result$165 = result$164;\n            }\n        }\n        agg0_count = result$165;\n        ;\n        agg0_countIsNull = isNull$165;\n\n\n        long result$172 = -1L;\n        boolean isNull$172;\n        if (isNull$166) {\n\n            isNull$172 = agg1_sumIsNull;\n            if (!isNull$172) {\n                result$172 = agg1_sum;\n            }\n        } else {\n            long result$171 = -1L;\n            boolean isNull$171;\n            if (agg1_sumIsNull) {\n\n\n                isNull$167 = false || isNull$166;\n                result$168 = -1L;\n                if (!isNull$167) {\n\n                    result$168 = (long) (((long) 0L) - field$166);\n\n                }\n\n                isNull$171 = isNull$167;\n                if (!isNull$171) {\n                    result$171 = result$168;\n                }\n            } else {\n\n\n                isNull$169 = agg1_sumIsNull || isNull$166;\n                result$170 = -1L;\n                if (!isNull$169) {\n\n                    result$170 = (long) (agg1_sum - field$166);\n\n                }\n\n                isNull$171 = isNull$169;\n                if (!isNull$171) {\n                    result$171 = result$170;\n                }\n            }\n            isNull$172 = isNull$171;\n            if (!isNull$172) {\n                result$172 = result$171;\n            }\n        }\n        agg1_sum = result$172;\n        ;\n        agg1_sumIsNull = isNull$172;\n\n\n        long result$175 = -1L;\n        boolean isNull$175;\n        if (isNull$166) {\n\n            isNull$175 = agg1_countIsNull;\n            if (!isNull$175) {\n                result$175 = agg1_count;\n            }\n        } else {\n\n\n            isNull$173 = agg1_countIsNull || false;\n            result$174 = -1L;\n            if (!isNull$173) {\n\n                result$174 = (long) (agg1_count - ((long) 1L));\n\n            }\n\n            isNull$175 = isNull$173;\n            if (!isNull$175) {\n                result$175 = result$174;\n            }\n        }\n        agg1_count = result$175;\n        ;\n        agg1_countIsNull = isNull$175;\n\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                .retract(agg2_acc_external, isNull$176 ? null : ((java.lang.Long) field$176));\n\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396\n                .retract(agg3_acc_external, isNull$177 ? 
null : ((java.lang.Long) field$177));\n\n\n        long result$184 = -1L;\n        boolean isNull$184;\n        if (isNull$178) {\n\n            isNull$184 = agg4_sumIsNull;\n            if (!isNull$184) {\n                result$184 = agg4_sum;\n            }\n        } else {\n            long result$183 = -1L;\n            boolean isNull$183;\n            if (agg4_sumIsNull) {\n\n\n                isNull$179 = false || isNull$178;\n                result$180 = -1L;\n                if (!isNull$179) {\n\n                    result$180 = (long) (((long) 0L) - field$178);\n\n                }\n\n                isNull$183 = isNull$179;\n                if (!isNull$183) {\n                    result$183 = result$180;\n                }\n            } else {\n\n\n                isNull$181 = agg4_sumIsNull || isNull$178;\n                result$182 = -1L;\n                if (!isNull$181) {\n\n                    result$182 = (long) (agg4_sum - field$178);\n\n                }\n\n                isNull$183 = isNull$181;\n                if (!isNull$183) {\n                    result$183 = result$182;\n                }\n            }\n            isNull$184 = isNull$183;\n            if (!isNull$184) {\n                result$184 = result$183;\n            }\n        }\n        agg4_sum = result$184;\n        ;\n        agg4_sumIsNull = isNull$184;\n\n\n        long result$187 = -1L;\n        boolean isNull$187;\n        if (isNull$178) {\n\n            isNull$187 = agg4_countIsNull;\n            if (!isNull$187) {\n                result$187 = agg4_count;\n            }\n        } else {\n\n\n            isNull$185 = agg4_countIsNull || false;\n            result$186 = -1L;\n            if (!isNull$185) {\n\n                result$186 = (long) (agg4_count - ((long) 1L));\n\n            }\n\n            isNull$187 = isNull$185;\n            if (!isNull$187) {\n                result$187 = result$186;\n            }\n        }\n        agg4_count = result$187;\n        ;\n        agg4_countIsNull = isNull$187;\n\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                .retract(agg5_acc_external, isNull$188 ? 
null : ((java.lang.Long) field$188));\n\n\n        isNull$189 = agg6_count1IsNull || false;\n        result$190 = -1L;\n        if (!isNull$189) {\n\n            result$190 = (long) (agg6_count1 - ((long) 1L));\n\n        }\n\n        agg6_count1 = result$190;\n        ;\n        agg6_count1IsNull = isNull$189;\n\n\n    }\n\n    @Override\n    public void merge(org.apache.flink.table.data.RowData otherAcc) throws Exception {\n\n        throw new java.lang.RuntimeException(\"This function not require merge method, but the merge method is called.\");\n\n    }\n\n    @Override\n    public void setAccumulators(org.apache.flink.table.data.RowData acc) throws Exception {\n\n        long field$114;\n        boolean isNull$114;\n        long field$115;\n        boolean isNull$115;\n        long field$116;\n        boolean isNull$116;\n        long field$117;\n        boolean isNull$117;\n        org.apache.flink.table.data.RowData field$118;\n        boolean isNull$118;\n        org.apache.flink.table.data.RowData field$120;\n        boolean isNull$120;\n        long field$122;\n        boolean isNull$122;\n        long field$123;\n        boolean isNull$123;\n        org.apache.flink.table.data.RowData field$124;\n        boolean isNull$124;\n        long field$126;\n        boolean isNull$126;\n        isNull$124 = acc.isNullAt(8);\n        field$124 = null;\n        if (!isNull$124) {\n            field$124 = acc.getRow(8, 3);\n        }\n        field$125 = null;\n        if (!isNull$124) {\n            field$125 = new org.apache.flink.table.data.UpdatableRowData(\n                    field$124,\n                    3);\n\n            agg5$map_dataview_raw_value.setJavaObject(agg5$map_dataview);\n            field$125.setField(2, agg5$map_dataview_raw_value);\n\n        }\n\n        isNull$126 = acc.isNullAt(9);\n        field$126 = -1L;\n        if (!isNull$126) {\n            field$126 = acc.getLong(9);\n        }\n        isNull$122 = acc.isNullAt(6);\n        field$122 = -1L;\n        if (!isNull$122) {\n            field$122 = acc.getLong(6);\n        }\n\n        isNull$118 = acc.isNullAt(4);\n        field$118 = null;\n        if (!isNull$118) {\n            field$118 = acc.getRow(4, 3);\n        }\n        field$119 = null;\n        if (!isNull$118) {\n            field$119 = new org.apache.flink.table.data.UpdatableRowData(\n                    field$118,\n                    3);\n\n            agg2$map_dataview_raw_value.setJavaObject(agg2$map_dataview);\n            field$119.setField(2, agg2$map_dataview_raw_value);\n\n        }\n\n        isNull$123 = acc.isNullAt(7);\n        field$123 = -1L;\n        if (!isNull$123) {\n            field$123 = acc.getLong(7);\n        }\n        isNull$114 = acc.isNullAt(0);\n        field$114 = -1L;\n        if (!isNull$114) {\n            field$114 = acc.getLong(0);\n        }\n        isNull$115 = acc.isNullAt(1);\n        field$115 = -1L;\n        if (!isNull$115) {\n            field$115 = acc.getLong(1);\n        }\n        isNull$117 = acc.isNullAt(3);\n        field$117 = -1L;\n        if (!isNull$117) {\n            field$117 = acc.getLong(3);\n        }\n\n        isNull$120 = acc.isNullAt(5);\n        field$120 = null;\n        if (!isNull$120) {\n            field$120 = acc.getRow(5, 3);\n        }\n        field$121 = null;\n        if (!isNull$120) {\n            field$121 = new org.apache.flink.table.data.UpdatableRowData(\n                    field$120,\n                    3);\n\n            
agg3$map_dataview_raw_value.setJavaObject(agg3$map_dataview);\n            field$121.setField(2, agg3$map_dataview_raw_value);\n\n        }\n\n        isNull$116 = acc.isNullAt(2);\n        field$116 = -1L;\n        if (!isNull$116) {\n            field$116 = acc.getLong(2);\n        }\n\n        agg0_sum = field$114;\n        ;\n        agg0_sumIsNull = isNull$114;\n\n\n        agg0_count = field$115;\n        ;\n        agg0_countIsNull = isNull$115;\n\n\n        agg1_sum = field$116;\n        ;\n        agg1_sumIsNull = isNull$116;\n\n\n        agg1_count = field$117;\n        ;\n        agg1_countIsNull = isNull$117;\n\n\n        agg2_acc_internal = field$119;\n        agg2_acc_external =\n                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) converter$107\n                        .toExternal((org.apache.flink.table.data.RowData) agg2_acc_internal);\n\n\n        agg3_acc_internal = field$121;\n        agg3_acc_external =\n                (org.apache.flink.table.planner.functions.aggfunctions.MinWithRetractAggFunction.MinWithRetractAccumulator) converter$109\n                        .toExternal((org.apache.flink.table.data.RowData) agg3_acc_internal);\n\n\n        agg4_sum = field$122;\n        ;\n        agg4_sumIsNull = isNull$122;\n\n\n        agg4_count = field$123;\n        ;\n        agg4_countIsNull = isNull$123;\n\n\n        agg5_acc_internal = field$125;\n        agg5_acc_external =\n                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) converter$107\n                        .toExternal((org.apache.flink.table.data.RowData) agg5_acc_internal);\n\n\n        agg6_count1 = field$126;\n        ;\n        agg6_count1IsNull = isNull$126;\n\n\n    }\n\n    @Override\n    public void resetAccumulators() throws Exception {\n\n\n        agg0_sum = ((long) -1L);\n        agg0_sumIsNull = true;\n\n\n        agg0_count = ((long) 0L);\n        agg0_countIsNull = false;\n\n\n        agg1_sum = ((long) -1L);\n        agg1_sumIsNull = true;\n\n\n        agg1_count = ((long) 0L);\n        agg1_countIsNull = false;\n\n\n        agg2_acc_external =\n                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                        .createAccumulator();\n        agg2_acc_internal = (org.apache.flink.table.data.RowData) converter$107.toInternalOrNull(\n                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) agg2_acc_external);\n\n\n        agg3_acc_external =\n                (org.apache.flink.table.planner.functions.aggfunctions.MinWithRetractAggFunction.MinWithRetractAccumulator) function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396\n                        .createAccumulator();\n        agg3_acc_internal = (org.apache.flink.table.data.RowData) converter$109.toInternalOrNull(\n                (org.apache.flink.table.planner.functions.aggfunctions.MinWithRetractAggFunction.MinWithRetractAccumulator) agg3_acc_external);\n\n\n        agg4_sum = ((long) -1L);\n        agg4_sumIsNull = true;\n\n\n        agg4_count = ((long) 0L);\n        agg4_countIsNull = false;\n\n\n        agg5_acc_external =\n                
(org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                        .createAccumulator();\n        agg5_acc_internal = (org.apache.flink.table.data.RowData) converter$107.toInternalOrNull(\n                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) agg5_acc_external);\n\n\n        agg6_count1 = ((long) 0L);\n        agg6_count1IsNull = false;\n\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n\n\n        acc$113 = new org.apache.flink.table.data.GenericRowData(10);\n\n\n        if (agg0_sumIsNull) {\n            acc$113.setField(0, null);\n        } else {\n            acc$113.setField(0, agg0_sum);\n        }\n\n\n        if (agg0_countIsNull) {\n            acc$113.setField(1, null);\n        } else {\n            acc$113.setField(1, agg0_count);\n        }\n\n\n        if (agg1_sumIsNull) {\n            acc$113.setField(2, null);\n        } else {\n            acc$113.setField(2, agg1_sum);\n        }\n\n\n        if (agg1_countIsNull) {\n            acc$113.setField(3, null);\n        } else {\n            acc$113.setField(3, agg1_count);\n        }\n\n\n        agg2_acc_internal = (org.apache.flink.table.data.RowData) converter$107.toInternalOrNull(\n                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) agg2_acc_external);\n        if (false) {\n            acc$113.setField(4, null);\n        } else {\n            acc$113.setField(4, agg2_acc_internal);\n        }\n\n\n        agg3_acc_internal = (org.apache.flink.table.data.RowData) converter$109.toInternalOrNull(\n                (org.apache.flink.table.planner.functions.aggfunctions.MinWithRetractAggFunction.MinWithRetractAccumulator) agg3_acc_external);\n        if (false) {\n            acc$113.setField(5, null);\n        } else {\n            acc$113.setField(5, agg3_acc_internal);\n        }\n\n\n        if (agg4_sumIsNull) {\n            acc$113.setField(6, null);\n        } else {\n            acc$113.setField(6, agg4_sum);\n        }\n\n\n        if (agg4_countIsNull) {\n            acc$113.setField(7, null);\n        } else {\n            acc$113.setField(7, agg4_count);\n        }\n\n\n        agg5_acc_internal = (org.apache.flink.table.data.RowData) converter$107.toInternalOrNull(\n                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) agg5_acc_external);\n        if (false) {\n            acc$113.setField(8, null);\n        } else {\n            acc$113.setField(8, agg5_acc_internal);\n        }\n\n\n        if (agg6_count1IsNull) {\n            acc$113.setField(9, null);\n        } else {\n            acc$113.setField(9, agg6_count1);\n        }\n\n\n        return acc$113;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n\n\n        acc$112 = new org.apache.flink.table.data.GenericRowData(10);\n\n\n        if (true) {\n            acc$112.setField(0, null);\n        } else {\n            acc$112.setField(0, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$112.setField(1, null);\n        } else {\n            acc$112.setField(1, ((long) 0L));\n        }\n\n\n        if (true) {\n            
acc$112.setField(2, null);\n        } else {\n            acc$112.setField(2, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$112.setField(3, null);\n        } else {\n            acc$112.setField(3, ((long) 0L));\n        }\n\n\n        org.apache.flink.table.data.RowData acc_internal$108 =\n                (org.apache.flink.table.data.RowData) (org.apache.flink.table.data.RowData) converter$107\n                        .toInternalOrNull(\n                                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                                        .createAccumulator());\n        if (false) {\n            acc$112.setField(4, null);\n        } else {\n            acc$112.setField(4, acc_internal$108);\n        }\n\n\n        org.apache.flink.table.data.RowData acc_internal$110 =\n                (org.apache.flink.table.data.RowData) (org.apache.flink.table.data.RowData) converter$109\n                        .toInternalOrNull(\n                                (org.apache.flink.table.planner.functions.aggfunctions.MinWithRetractAggFunction.MinWithRetractAccumulator) function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396\n                                        .createAccumulator());\n        if (false) {\n            acc$112.setField(5, null);\n        } else {\n            acc$112.setField(5, acc_internal$110);\n        }\n\n\n        if (true) {\n            acc$112.setField(6, null);\n        } else {\n            acc$112.setField(6, ((long) -1L));\n        }\n\n\n        if (false) {\n            acc$112.setField(7, null);\n        } else {\n            acc$112.setField(7, ((long) 0L));\n        }\n\n\n        org.apache.flink.table.data.RowData acc_internal$111 =\n                (org.apache.flink.table.data.RowData) (org.apache.flink.table.data.RowData) converter$107\n                        .toInternalOrNull(\n                                (org.apache.flink.table.planner.functions.aggfunctions.MaxWithRetractAggFunction.MaxWithRetractAccumulator) function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                                        .createAccumulator());\n        if (false) {\n            acc$112.setField(8, null);\n        } else {\n            acc$112.setField(8, acc_internal$111);\n        }\n\n\n        if (false) {\n            acc$112.setField(9, null);\n        } else {\n            acc$112.setField(9, ((long) 0L));\n        }\n\n\n        return acc$112;\n\n    }\n\n    @Override\n    public org.apache.flink.table.data.RowData getValue() throws Exception {\n\n        boolean isNull$191;\n        boolean result$192;\n        boolean isNull$194;\n        boolean result$195;\n        boolean isNull$203;\n        boolean result$204;\n\n        aggValue$209 = new org.apache.flink.table.data.GenericRowData(6);\n\n        isNull$191 = agg0_countIsNull || false;\n        result$192 = false;\n        if (!isNull$191) {\n\n            result$192 = agg0_count == ((long) 0L);\n\n        }\n\n        long result$193 = -1L;\n        boolean isNull$193;\n        if (result$192) {\n\n            isNull$193 = true;\n            if (!isNull$193) {\n                result$193 = ((long) -1L);\n            }\n        } else {\n\n            
isNull$193 = agg0_sumIsNull;\n            if (!isNull$193) {\n                result$193 = agg0_sum;\n            }\n        }\n        if (isNull$193) {\n            aggValue$209.setField(0, null);\n        } else {\n            aggValue$209.setField(0, result$193);\n        }\n\n\n        isNull$194 = agg1_countIsNull || false;\n        result$195 = false;\n        if (!isNull$194) {\n\n            result$195 = agg1_count == ((long) 0L);\n\n        }\n\n        long result$196 = -1L;\n        boolean isNull$196;\n        if (result$195) {\n\n            isNull$196 = true;\n            if (!isNull$196) {\n                result$196 = ((long) -1L);\n            }\n        } else {\n\n            isNull$196 = agg1_sumIsNull;\n            if (!isNull$196) {\n                result$196 = agg1_sum;\n            }\n        }\n        if (isNull$196) {\n            aggValue$209.setField(1, null);\n        } else {\n            aggValue$209.setField(1, result$196);\n        }\n\n\n        java.lang.Long value_external$197 = (java.lang.Long)\n                function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                        .getValue(agg2_acc_external);\n        java.lang.Long value_internal$198 =\n                value_external$197;\n        boolean valueIsNull$199 = value_internal$198 == null;\n\n        if (valueIsNull$199) {\n            aggValue$209.setField(2, null);\n        } else {\n            aggValue$209.setField(2, value_internal$198);\n        }\n\n\n        java.lang.Long value_external$200 = (java.lang.Long)\n                function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396\n                        .getValue(agg3_acc_external);\n        java.lang.Long value_internal$201 =\n                value_external$200;\n        boolean valueIsNull$202 = value_internal$201 == null;\n\n        if (valueIsNull$202) {\n            aggValue$209.setField(3, null);\n        } else {\n            aggValue$209.setField(3, value_internal$201);\n        }\n\n\n        isNull$203 = agg4_countIsNull || false;\n        result$204 = false;\n        if (!isNull$203) {\n\n            result$204 = agg4_count == ((long) 0L);\n\n        }\n\n        long result$205 = -1L;\n        boolean isNull$205;\n        if (result$204) {\n\n            isNull$205 = true;\n            if (!isNull$205) {\n                result$205 = ((long) -1L);\n            }\n        } else {\n\n            isNull$205 = agg4_sumIsNull;\n            if (!isNull$205) {\n                result$205 = agg4_sum;\n            }\n        }\n        if (isNull$205) {\n            aggValue$209.setField(4, null);\n        } else {\n            aggValue$209.setField(4, result$205);\n        }\n\n\n        java.lang.Long value_external$206 = (java.lang.Long)\n                function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                        .getValue(agg5_acc_external);\n        java.lang.Long value_internal$207 =\n                value_external$206;\n        boolean valueIsNull$208 = value_internal$207 == null;\n\n        if (valueIsNull$208) {\n            aggValue$209.setField(5, null);\n        } else {\n            aggValue$209.setField(5, value_internal$207);\n        }\n\n\n        return aggValue$209;\n\n    }\n\n    @Override\n    public void cleanup() throws Exception {\n\n        agg2$map_dataview.clear();\n\n\n 
       agg3$map_dataview.clear();\n\n\n        agg5$map_dataview.clear();\n\n\n    }\n\n    @Override\n    public void close() throws Exception {\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MaxWithRetractAggFunction$d78f624eeff2a86742b5f64899608448\n                .close();\n\n\n        function_org$apache$flink$table$planner$functions$aggfunctions$MinWithRetractAggFunction$00780063e1d540e25ad535dd2f326396\n                .close();\n\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_02_cumulate_window/earlyfire/GroupingWindowAggsHandler$57.java",
    "content": "//package flink.examples.sql._07.query._04_window_agg._02_cumulate_window.earlyfire;\n//\n///**\n// * {@link org.apache.flink.table.runtime.operators.window.AggregateWindowOperator}\n// */\n//public final class GroupingWindowAggsHandler$57\n//        implements\n//        org.apache.flink.table.runtime.generated.NamespaceAggsHandleFunction<org.apache.flink.table.runtime.operators.window.TimeWindow> {\n//\n//    long agg0_count1;\n//    boolean agg0_count1IsNull;\n//    long agg1_sum;\n//    boolean agg1_sumIsNull;\n//    long agg2_max;\n//    boolean agg2_maxIsNull;\n//    long agg3_min;\n//    boolean agg3_minIsNull;\n//    long agg4_count;\n//    boolean agg4_countIsNull;\n//    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$20;\n//    private transient org.apache.flink.table.runtime.typeutils.ExternalSerializer externalSerializer$21;\n//    private org.apache.flink.table.runtime.dataview.StateMapView distinctAcc_0_dataview;\n//    private org.apache.flink.table.data.binary.BinaryRawValueData distinctAcc_0_dataview_raw_value;\n//    private org.apache.flink.table.api.dataview.MapView distinct_view_0;\n//    org.apache.flink.table.data.GenericRowData acc$23 = new org.apache.flink.table.data.GenericRowData(6);\n//    org.apache.flink.table.data.GenericRowData acc$25 = new org.apache.flink.table.data.GenericRowData(6);\n//    org.apache.flink.table.data.GenericRowData aggValue$56 = new org.apache.flink.table.data.GenericRowData(9);\n//\n//    private org.apache.flink.table.runtime.dataview.StateDataViewStore store;\n//\n//    private org.apache.flink.table.runtime.operators.window.TimeWindow namespace;\n//\n//    public GroupingWindowAggsHandler$57(Object[] references) throws Exception {\n//        externalSerializer$20 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[0]));\n//        externalSerializer$21 = (((org.apache.flink.table.runtime.typeutils.ExternalSerializer) references[1]));\n//    }\n//\n//    private org.apache.flink.api.common.functions.RuntimeContext getRuntimeContext() {\n//        return store.getRuntimeContext();\n//    }\n//\n//    @Override\n//    public void open(org.apache.flink.table.runtime.dataview.StateDataViewStore store) throws Exception {\n//        this.store = store;\n//\n//        distinctAcc_0_dataview = (org.apache.flink.table.runtime.dataview.StateMapView) store\n//                .getStateMapView(\"distinctAcc_0\", true, externalSerializer$20, externalSerializer$21);\n//        distinctAcc_0_dataview_raw_value =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinctAcc_0_dataview);\n//\n//        distinct_view_0 = distinctAcc_0_dataview;\n//    }\n//\n//    @Override\n//    public void accumulate(org.apache.flink.table.data.RowData accInput) throws Exception {\n//\n//        boolean isNull$32;\n//        long result$33;\n//        long field$34;\n//        boolean isNull$34;\n//        boolean isNull$35;\n//        long result$36;\n//        boolean isNull$39;\n//        boolean result$40;\n//        boolean isNull$44;\n//        boolean result$45;\n//        long field$49;\n//        boolean isNull$49;\n//        boolean isNull$51;\n//        long result$52;\n//        isNull$49 = accInput.isNullAt(4);\n//        field$49 = -1L;\n//        if (!isNull$49) {\n//            field$49 = accInput.getLong(4);\n//        }\n//        isNull$34 = accInput.isNullAt(3);\n//        field$34 = -1L;\n//        if (!isNull$34) {\n//     
       field$34 = accInput.getLong(3);\n//        }\n//\n//\n//        isNull$32 = agg0_count1IsNull || false;\n//        result$33 = -1L;\n//        if (!isNull$32) {\n//\n//            result$33 = (long) (agg0_count1 + ((long) 1L));\n//\n//        }\n//\n//        agg0_count1 = result$33;\n//        ;\n//        agg0_count1IsNull = isNull$32;\n//\n//\n//        long result$38 = -1L;\n//        boolean isNull$38;\n//        if (isNull$34) {\n//\n//            isNull$38 = agg1_sumIsNull;\n//            if (!isNull$38) {\n//                result$38 = agg1_sum;\n//            }\n//        } else {\n//            long result$37 = -1L;\n//            boolean isNull$37;\n//            if (agg1_sumIsNull) {\n//\n//                isNull$37 = isNull$34;\n//                if (!isNull$37) {\n//                    result$37 = field$34;\n//                }\n//            } else {\n//\n//\n//                isNull$35 = agg1_sumIsNull || isNull$34;\n//                result$36 = -1L;\n//                if (!isNull$35) {\n//\n//                    result$36 = (long) (agg1_sum + field$34);\n//\n//                }\n//\n//                isNull$37 = isNull$35;\n//                if (!isNull$37) {\n//                    result$37 = result$36;\n//                }\n//            }\n//            isNull$38 = isNull$37;\n//            if (!isNull$38) {\n//                result$38 = result$37;\n//            }\n//        }\n//        agg1_sum = result$38;\n//        ;\n//        agg1_sumIsNull = isNull$38;\n//\n//\n//        long result$43 = -1L;\n//        boolean isNull$43;\n//        if (isNull$34) {\n//\n//            isNull$43 = agg2_maxIsNull;\n//            if (!isNull$43) {\n//                result$43 = agg2_max;\n//            }\n//        } else {\n//            long result$42 = -1L;\n//            boolean isNull$42;\n//            if (agg2_maxIsNull) {\n//\n//                isNull$42 = isNull$34;\n//                if (!isNull$42) {\n//                    result$42 = field$34;\n//                }\n//            } else {\n//                isNull$39 = isNull$34 || agg2_maxIsNull;\n//                result$40 = false;\n//                if (!isNull$39) {\n//\n//                    result$40 = field$34 > agg2_max;\n//\n//                }\n//\n//                long result$41 = -1L;\n//                boolean isNull$41;\n//                if (result$40) {\n//\n//                    isNull$41 = isNull$34;\n//                    if (!isNull$41) {\n//                        result$41 = field$34;\n//                    }\n//                } else {\n//\n//                    isNull$41 = agg2_maxIsNull;\n//                    if (!isNull$41) {\n//                        result$41 = agg2_max;\n//                    }\n//                }\n//                isNull$42 = isNull$41;\n//                if (!isNull$42) {\n//                    result$42 = result$41;\n//                }\n//            }\n//            isNull$43 = isNull$42;\n//            if (!isNull$43) {\n//                result$43 = result$42;\n//            }\n//        }\n//        agg2_max = result$43;\n//        ;\n//        agg2_maxIsNull = isNull$43;\n//\n//\n//        long result$48 = -1L;\n//        boolean isNull$48;\n//        if (isNull$34) {\n//\n//            isNull$48 = agg3_minIsNull;\n//            if (!isNull$48) {\n//                result$48 = agg3_min;\n//            }\n//        } else {\n//            long result$47 = -1L;\n//            boolean isNull$47;\n//            if (agg3_minIsNull) {\n//\n//                
isNull$47 = isNull$34;\n//                if (!isNull$47) {\n//                    result$47 = field$34;\n//                }\n//            } else {\n//                isNull$44 = isNull$34 || agg3_minIsNull;\n//                result$45 = false;\n//                if (!isNull$44) {\n//\n//                    result$45 = field$34 < agg3_min;\n//\n//                }\n//\n//                long result$46 = -1L;\n//                boolean isNull$46;\n//                if (result$45) {\n//\n//                    isNull$46 = isNull$34;\n//                    if (!isNull$46) {\n//                        result$46 = field$34;\n//                    }\n//                } else {\n//\n//                    isNull$46 = agg3_minIsNull;\n//                    if (!isNull$46) {\n//                        result$46 = agg3_min;\n//                    }\n//                }\n//                isNull$47 = isNull$46;\n//                if (!isNull$47) {\n//                    result$47 = result$46;\n//                }\n//            }\n//            isNull$48 = isNull$47;\n//            if (!isNull$48) {\n//                result$48 = result$47;\n//            }\n//        }\n//        agg3_min = result$48;\n//        ;\n//        agg3_minIsNull = isNull$48;\n//\n//\n//        java.lang.Long distinctKey$50 = (java.lang.Long) field$49;\n//        if (isNull$49) {\n//            distinctKey$50 = null;\n//        }\n//\n//        java.lang.Long value$54 = (java.lang.Long) distinct_view_0.get(distinctKey$50);\n//        if (value$54 == null) {\n//            value$54 = 0L;\n//        }\n//\n//        boolean is_distinct_value_changed_0 = false;\n//\n//        long existed$55 = ((long) value$54) & (1L << 0);\n//        if (existed$55 == 0) {  // not existed\n//            value$54 = ((long) value$54) | (1L << 0);\n//            is_distinct_value_changed_0 = true;\n//\n//            long result$53 = -1L;\n//            boolean isNull$53;\n//            if (isNull$49) {\n//\n//                isNull$53 = agg4_countIsNull;\n//                if (!isNull$53) {\n//                    result$53 = agg4_count;\n//                }\n//            } else {\n//\n//\n//                isNull$51 = agg4_countIsNull || false;\n//                result$52 = -1L;\n//                if (!isNull$51) {\n//\n//                    result$52 = (long) (agg4_count + ((long) 1L));\n//\n//                }\n//\n//                isNull$53 = isNull$51;\n//                if (!isNull$53) {\n//                    result$53 = result$52;\n//                }\n//            }\n//            agg4_count = result$53;\n//            ;\n//            agg4_countIsNull = isNull$53;\n//\n//        }\n//\n//        if (is_distinct_value_changed_0) {\n//            distinct_view_0.put(distinctKey$50, value$54);\n//        }\n//\n//\n//    }\n//\n//    @Override\n//    public void retract(org.apache.flink.table.data.RowData retractInput) throws Exception {\n//\n//        throw new java.lang.RuntimeException(\n//                \"This function not require retract method, but the retract method is called.\");\n//\n//    }\n//\n//    @Override\n//    public void merge(Object ns, org.apache.flink.table.data.RowData otherAcc) throws Exception {\n//        namespace = (org.apache.flink.table.runtime.operators.window.TimeWindow) ns;\n//\n//        throw new java.lang.RuntimeException(\"This function not require merge method, but the merge method is called.\");\n//\n//    }\n//\n//    @Override\n//    public void setAccumulators(Object ns, 
org.apache.flink.table.data.RowData acc)\n//            throws Exception {\n//        namespace = (org.apache.flink.table.runtime.operators.window.TimeWindow) ns;\n//\n//        long field$26;\n//        boolean isNull$26;\n//        long field$27;\n//        boolean isNull$27;\n//        long field$28;\n//        boolean isNull$28;\n//        long field$29;\n//        boolean isNull$29;\n//        long field$30;\n//        boolean isNull$30;\n//        org.apache.flink.table.data.binary.BinaryRawValueData field$31;\n//        boolean isNull$31;\n//        isNull$30 = acc.isNullAt(4);\n//        field$30 = -1L;\n//        if (!isNull$30) {\n//            field$30 = acc.getLong(4);\n//        }\n//        isNull$26 = acc.isNullAt(0);\n//        field$26 = -1L;\n//        if (!isNull$26) {\n//            field$26 = acc.getLong(0);\n//        }\n//        isNull$27 = acc.isNullAt(1);\n//        field$27 = -1L;\n//        if (!isNull$27) {\n//            field$27 = acc.getLong(1);\n//        }\n//        isNull$29 = acc.isNullAt(3);\n//        field$29 = -1L;\n//        if (!isNull$29) {\n//            field$29 = acc.getLong(3);\n//        }\n//\n//        // when namespace is null, the dataview is used in heap, no key and namespace set\n//        if (namespace != null) {\n//            distinctAcc_0_dataview.setCurrentNamespace(namespace);\n//            distinct_view_0 = distinctAcc_0_dataview;\n//        } else {\n//            isNull$31 = acc.isNullAt(5);\n//            field$31 = null;\n//            if (!isNull$31) {\n//                field$31 = ((org.apache.flink.table.data.binary.BinaryRawValueData) acc.getRawValue(5));\n//            }\n//            distinct_view_0 = (org.apache.flink.table.api.dataview.MapView) field$31.getJavaObject();\n//        }\n//\n//        isNull$28 = acc.isNullAt(2);\n//        field$28 = -1L;\n//        if (!isNull$28) {\n//            field$28 = acc.getLong(2);\n//        }\n//\n//        agg0_count1 = field$26;\n//        ;\n//        agg0_count1IsNull = isNull$26;\n//\n//\n//        agg1_sum = field$27;\n//        ;\n//        agg1_sumIsNull = isNull$27;\n//\n//\n//        agg2_max = field$28;\n//        ;\n//        agg2_maxIsNull = isNull$28;\n//\n//\n//        agg3_min = field$29;\n//        ;\n//        agg3_minIsNull = isNull$29;\n//\n//\n//        agg4_count = field$30;\n//        ;\n//        agg4_countIsNull = isNull$30;\n//\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getAccumulators() throws Exception {\n//\n//\n//        acc$25 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (agg0_count1IsNull) {\n//            acc$25.setField(0, null);\n//        } else {\n//            acc$25.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            acc$25.setField(1, null);\n//        } else {\n//            acc$25.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            acc$25.setField(2, null);\n//        } else {\n//            acc$25.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            acc$25.setField(3, null);\n//        } else {\n//            acc$25.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            acc$25.setField(4, null);\n//        } else {\n//            acc$25.setField(4, agg4_count);\n//        }\n//\n//\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$24 =\n//                
org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(distinct_view_0);\n//\n//        if (false) {\n//            acc$25.setField(5, null);\n//        } else {\n//            acc$25.setField(5, distinct_acc$24);\n//        }\n//\n//\n//        return acc$25;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData createAccumulators() throws Exception {\n//\n//\n//        acc$23 = new org.apache.flink.table.data.GenericRowData(6);\n//\n//\n//        if (false) {\n//            acc$23.setField(0, null);\n//        } else {\n//            acc$23.setField(0, ((long) 0L));\n//        }\n//\n//\n//        if (true) {\n//            acc$23.setField(1, null);\n//        } else {\n//            acc$23.setField(1, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$23.setField(2, null);\n//        } else {\n//            acc$23.setField(2, ((long) -1L));\n//        }\n//\n//\n//        if (true) {\n//            acc$23.setField(3, null);\n//        } else {\n//            acc$23.setField(3, ((long) -1L));\n//        }\n//\n//\n//        if (false) {\n//            acc$23.setField(4, null);\n//        } else {\n//            acc$23.setField(4, ((long) 0L));\n//        }\n//\n//\n//        org.apache.flink.table.api.dataview.MapView mapview$22 = new org.apache.flink.table.api.dataview.MapView();\n//        org.apache.flink.table.data.binary.BinaryRawValueData distinct_acc$22 =\n//                org.apache.flink.table.data.binary.BinaryRawValueData.fromObject(mapview$22);\n//\n//        if (false) {\n//            acc$23.setField(5, null);\n//        } else {\n//            acc$23.setField(5, distinct_acc$22);\n//        }\n//\n//\n//        return acc$23;\n//\n//    }\n//\n//    @Override\n//    public org.apache.flink.table.data.RowData getValue(Object ns) throws Exception {\n//        namespace = (org.apache.flink.table.runtime.operators.window.TimeWindow) ns;\n//\n//\n//        aggValue$56 = new org.apache.flink.table.data.GenericRowData(9);\n//\n//\n//        if (agg0_count1IsNull) {\n//            aggValue$56.setField(0, null);\n//        } else {\n//            aggValue$56.setField(0, agg0_count1);\n//        }\n//\n//\n//        if (agg1_sumIsNull) {\n//            aggValue$56.setField(1, null);\n//        } else {\n//            aggValue$56.setField(1, agg1_sum);\n//        }\n//\n//\n//        if (agg2_maxIsNull) {\n//            aggValue$56.setField(2, null);\n//        } else {\n//            aggValue$56.setField(2, agg2_max);\n//        }\n//\n//\n//        if (agg3_minIsNull) {\n//            aggValue$56.setField(3, null);\n//        } else {\n//            aggValue$56.setField(3, agg3_min);\n//        }\n//\n//\n//        if (agg4_countIsNull) {\n//            aggValue$56.setField(4, null);\n//        } else {\n//            aggValue$56.setField(4, agg4_count);\n//        }\n//\n//\n//        if (false) {\n//            aggValue$56.setField(5, null);\n//        } else {\n//            aggValue$56.setField(5, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace.getStart()));\n//        }\n//\n//\n//        if (false) {\n//            aggValue$56.setField(6, null);\n//        } else {\n//            aggValue$56.setField(6, org.apache.flink.table.data.TimestampData.fromEpochMillis(namespace.getEnd()));\n//        }\n//\n//\n//        if (false) {\n//            aggValue$56.setField(7, null);\n//        } else {\n//            aggValue$56.setField(7,\n//                    
org.apache.flink.table.data.TimestampData.fromEpochMillis(\n//                            namespace.getEnd() - 1)\n//            );\n//        }\n//\n//\n//        if (true) {\n//            aggValue$56.setField(8, null);\n//        } else {\n//            aggValue$56.setField(8, org.apache.flink.table.data.TimestampData.fromEpochMillis(-1L));\n//        }\n//\n//\n//        return aggValue$56;\n//\n//    }\n//\n//    @Override\n//    public void cleanup(Object ns) throws Exception {\n//        namespace = (org.apache.flink.table.runtime.operators.window.TimeWindow) ns;\n//\n//        distinctAcc_0_dataview.setCurrentNamespace(namespace);\n//        distinctAcc_0_dataview.clear();\n//\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//    }\n//}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_04_window_agg/_03_hop_window/HopWindowGroupWindowAggTest.java",
    "content": "package flink.examples.sql._07.query._04_window_agg._03_hop_window;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class HopWindowGroupWindowAggTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"-- 数据源表，用户购买行为记录表\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    -- 维度数据\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    -- 用户 id\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    -- 用户\\n\"\n                + \"    price BIGINT,\\n\"\n                + \"    -- 事件时间戳\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    -- watermark 设置\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '5',\\n\"\n                + \"  'fields.dim.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"-- 数据汇表\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    dim STRING,\\n\"\n                + \"    pv BIGINT, -- 购买商品数量\\n\"\n                + \"    window_start bigint\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"-- 数据处理逻辑\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select dim,\\n\"\n                + \"\\t   sum(bucket_pv) as pv,\\n\"\n                + \"\\t   window_start\\n\"\n                + \"from (\\n\"\n                + \"\\t SELECT dim,\\n\"\n                + \"\\t \\t    UNIX_TIMESTAMP(CAST(session_start(row_time, interval '1' second) AS STRING)) * 1000 as \"\n                + \"window_start, \\n\"\n                + \"\\t        count(1) as bucket_pv\\n\"\n                + \"\\t FROM source_table\\n\"\n                + \"\\t GROUP BY dim\\n\"\n                + \"\\t\\t\\t  , mod(user_id, 1024)\\n\"\n                + \"              , session(row_time, interval '1' second)\\n\"\n                + \")\\n\"\n                + \"group by dim,\\n\"\n                + \"\\t\\t window_start\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_01_row_number/RowNumberOrderByBigintTest.java",
    "content": "package flink.examples.sql._07.query._05_over._01_row_number;\n\nimport java.util.Arrays;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class RowNumberOrderByBigintTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 TUMBLE WINDOW 案例\");\n\n        tEnv.getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          row_number() over(partition 
by user_id order by server_timestamp) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n        /**\n         * rank (Top-N) operator: {@link org.apache.flink.table.runtime.operators.rank.AbstractTopNFunction} and its subclasses\n         */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(tEnv::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_01_row_number/RowNumberOrderByStringTest.java",
    "content": "package flink.examples.sql._07.query._05_over._01_row_number;\n\nimport java.util.Arrays;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class RowNumberOrderByStringTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 TUMBLE WINDOW 案例\");\n\n        tEnv.getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.server_timestamp.length' = '1'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       server_timestamp,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + 
\"          server_timestamp,\\n\"\n                + \"          row_number() over(partition by user_id order by server_timestamp) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(tEnv::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_01_row_number/RowNumberOrderByUnixTimestampTest.java",
    "content": "package flink.examples.sql._07.query._05_over._01_row_number;\n\nimport java.util.Arrays;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class RowNumberOrderByUnixTimestampTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 TUMBLE WINDOW 案例\");\n\n        tEnv.getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp as UNIX_TIMESTAMP()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          row_number() over(partition by user_id order by server_timestamp) as rn\\n\"\n                + \"      FROM source_table\\n\"\n               
 + \")\\n\"\n                + \"where rn = 1\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(tEnv::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_01_row_number/RowNumberWithoutPartitionKeyTest.java",
    "content": "package flink.examples.sql._07.query._05_over._01_row_number;\n\nimport java.util.Arrays;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\n\npublic class RowNumberWithoutPartitionKeyTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 TUMBLE WINDOW 案例\");\n\n        tEnv.getConfig().getConfiguration().setString(\"state.backend\", \"rocksdb\");\n\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          row_number() 
over(order by server_timestamp) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(tEnv::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_01_row_number/RowNumberWithoutRowNumberEqual1Test.java",
    "content": "package flink.examples.sql._07.query._05_over._01_row_number;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class RowNumberWithoutRowNumberEqual1Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp BIGINT,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE FUNCTION mod_udf as 'flink.examples.sql._07.query._05_over._01_row_number.Scalar_UDF';\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select mod_udf(user_id, 1024) as user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          row_number() over(partition by user_id order by proctime) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_01_row_number/Scalar_UDF.java",
    "content": "package flink.examples.sql._07.query._05_over._01_row_number;\n\nimport org.apache.flink.table.functions.FunctionContext;\nimport org.apache.flink.table.functions.ScalarFunction;\n\n\npublic class Scalar_UDF extends ScalarFunction {\n\n    @Override\n    public void open(FunctionContext context) throws Exception {\n        super.open(context);\n\n\n    }\n\n    public int eval(Long id, int remainder) {\n        return (int) (id % remainder);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_02_agg/RangeIntervalProctimeTest.java",
    "content": "package flink.examples.sql._07.query._05_over._02_agg;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class RangeIntervalProctimeTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[]{\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id BIGINT,\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    order_time as PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '2',\\n\"\n                + \"  'fields.amount.min' = '1',\\n\"\n                + \"  'fields.amount.max' = '10',\\n\"\n                + \"  'fields.product.min' = '1',\\n\"\n                + \"  'fields.product.max' = '2'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    order_time TIMESTAMP(3),\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    one_hour_prod_amount_sum BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT product, order_time, amount,\\n\"\n                + \"  SUM(amount) OVER (\\n\"\n                + \"    PARTITION BY product\\n\"\n                + \"    ORDER BY order_time\\n\"\n                + \"    RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW\\n\"\n                + \"  ) AS one_hour_prod_amount_sum\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_02_agg/RangeIntervalRowtimeAscendingTest.java",
    "content": "package flink.examples.sql._07.query._05_over._02_agg;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class RangeIntervalRowtimeAscendingTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[]{\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id BIGINT,\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    order_time as cast(CURRENT_TIMESTAMP as TIMESTAMP(3)),\\n\"\n                + \"    WATERMARK FOR order_time AS order_time - INTERVAL '0.001' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '2',\\n\"\n                + \"  'fields.amount.min' = '1',\\n\"\n                + \"  'fields.amount.max' = '10',\\n\"\n                + \"  'fields.product.min' = '1',\\n\"\n                + \"  'fields.product.max' = '2'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    order_time TIMESTAMP(3),\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    one_hour_prod_amount_sum BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT product, order_time, amount,\\n\"\n                + \"  SUM(amount) OVER (\\n\"\n                + \"    PARTITION BY product\\n\"\n                + \"    ORDER BY order_time\\n\"\n                + \"    RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW\\n\"\n                + \"  ) AS one_hour_prod_amount_sum\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_02_agg/RangeIntervalRowtimeBoundedOutOfOrdernessTest.java",
    "content": "package flink.examples.sql._07.query._05_over._02_agg;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class RangeIntervalRowtimeBoundedOutOfOrdernessTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[]{\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id BIGINT,\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    order_time as cast(CURRENT_TIMESTAMP as TIMESTAMP(3)),\\n\"\n                + \"    WATERMARK FOR order_time AS order_time - INTERVAL '10' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '2',\\n\"\n                + \"  'fields.amount.min' = '1',\\n\"\n                + \"  'fields.amount.max' = '10',\\n\"\n                + \"  'fields.product.min' = '1',\\n\"\n                + \"  'fields.product.max' = '2'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    order_time TIMESTAMP(3),\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    one_hour_prod_amount_sum BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT product, order_time, amount,\\n\"\n                + \"  SUM(amount) OVER (\\n\"\n                + \"    PARTITION BY product\\n\"\n                + \"    ORDER BY order_time\\n\"\n                + \"    RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW\\n\"\n                + \"  ) AS one_hour_prod_amount_sum\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_02_agg/RangeIntervalRowtimeStrictlyAscendingTest.java",
    "content": "package flink.examples.sql._07.query._05_over._02_agg;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class RangeIntervalRowtimeStrictlyAscendingTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[]{\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id BIGINT,\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    order_time as cast(CURRENT_TIMESTAMP as TIMESTAMP(3)),\\n\"\n                + \"    WATERMARK FOR order_time AS order_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '2',\\n\"\n                + \"  'fields.amount.min' = '1',\\n\"\n                + \"  'fields.amount.max' = '10',\\n\"\n                + \"  'fields.product.min' = '1',\\n\"\n                + \"  'fields.product.max' = '2'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    order_time TIMESTAMP(3),\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    one_hour_prod_amount_sum BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT product, order_time, amount,\\n\"\n                + \"  SUM(amount) OVER (\\n\"\n                + \"    PARTITION BY product\\n\"\n                + \"    ORDER BY order_time\\n\"\n                + \"    RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW\\n\"\n                + \"  ) AS one_hour_prod_amount_sum\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_05_over/_02_agg/RowIntervalTest.java",
    "content": "package flink.examples.sql._07.query._05_over._02_agg;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class RowIntervalTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[]{\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id BIGINT,\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    order_time as cast(CURRENT_TIMESTAMP as TIMESTAMP(3)),\\n\"\n                + \"    WATERMARK FOR order_time AS order_time - INTERVAL '0.001' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '2',\\n\"\n                + \"  'fields.amount.min' = '1',\\n\"\n                + \"  'fields.amount.max' = '2',\\n\"\n                + \"  'fields.product.min' = '1',\\n\"\n                + \"  'fields.product.max' = '2'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    product BIGINT,\\n\"\n                + \"    order_time TIMESTAMP(3),\\n\"\n                + \"    amount BIGINT,\\n\"\n                + \"    one_hour_prod_amount_sum BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT product, order_time, amount,\\n\"\n                + \"  SUM(amount) OVER (\\n\"\n                + \"    PARTITION BY product\\n\"\n                + \"    ORDER BY order_time\\n\"\n                + \"    ROWS BETWEEN 5 PRECEDING AND CURRENT ROW\\n\"\n                + \"  ) AS one_hour_prod_amount_sum\\n\"\n                + \"FROM source_table\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_01_regular_joins/_01_inner_join/ConditionFunction$4.java",
    "content": "package flink.examples.sql._07.query._06_joins._01_regular_joins._01_inner_join;\n\n\npublic class ConditionFunction$4 extends org.apache.flink.api.common.functions.AbstractRichFunction\n        implements org.apache.flink.table.runtime.generated.JoinCondition {\n\n\n    public ConditionFunction$4(Object[] references) throws Exception {\n    }\n\n\n    @Override\n    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n\n    }\n\n    @Override\n    public boolean apply(org.apache.flink.table.data.RowData in1, org.apache.flink.table.data.RowData in2) {\n\n\n        return true;\n    }\n\n    @Override\n    public void close() throws Exception {\n        super.close();\n\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_01_regular_joins/_01_inner_join/_01_InnerJoinsTest.java",
    "content": "package flink.examples.sql._07.query._06_joins._01_regular_joins._01_inner_join;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class _01_InnerJoinsTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '2',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '100'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"  log_id BIGINT,\\n\"\n                + \"  click_params     STRING\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '2',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table\\n\"\n                + \"INNER JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_01_regular_joins/_01_inner_join/_02_InnerJoinsOnNotEqualTest.java",
    "content": "package flink.examples.sql._07.query._06_joins._01_regular_joins._01_inner_join;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class _02_InnerJoinsOnNotEqualTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '2',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"  log_id BIGINT,\\n\"\n                + \"  click_params     STRING\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '2',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table\\n\"\n                + \"INNER JOIN click_log_table ON show_log_table.log_id > click_log_table.log_id;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_01_regular_joins/_02_outer_join/_01_LeftJoinsTest.java",
    "content": "package flink.examples.sql._07.query._06_joins._01_regular_joins._02_outer_join;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class _01_LeftJoinsTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '3',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"  log_id BIGINT,\\n\"\n                + \"  click_params     STRING\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '3',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table\\n\"\n                + \"LEFT JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_01_regular_joins/_02_outer_join/_02_RightJoinsTest.java",
    "content": "package flink.examples.sql._07.query._06_joins._01_regular_joins._02_outer_join;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class _02_RightJoinsTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '2',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"  log_id BIGINT,\\n\"\n                + \"  click_params     STRING\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '2',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table\\n\"\n                + \"RIGHT JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_01_regular_joins/_02_outer_join/_03_FullJoinsTest.java",
    "content": "package flink.examples.sql._07.query._06_joins._01_regular_joins._02_outer_join;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class _03_FullJoinsTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '2',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"  log_id BIGINT,\\n\"\n                + \"  click_params     STRING\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '2',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table\\n\"\n                + \"FULL JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_01_proctime/Interval_Full_Joins_ProcesingTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._01_proctime;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Interval_Full_Joins_ProcesingTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Join 处理时间案例\");\n        flinkEnv.env().setParallelism(1);\n\n        String exampleSql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table FULL JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id\\n\"\n                + \"AND show_log_table.proctime BETWEEN click_log_table.proctime - INTERVAL '4' HOUR AND click_log_table.proctime;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.streaming.api.operators.co.KeyedCoProcessOperator}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.ProcTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(exampleSql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_01_proctime/Interval_Inner_Joins_ProcesingTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._01_proctime;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Interval_Inner_Joins_ProcesingTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Join 处理时间案例\");\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE dim_table (\\n\"\n                + \"  user_id BIGINT,\\n\"\n                + \"  platform STRING,\\n\"\n                + \"  proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.platform.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    platform STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    s.user_id as user_id,\\n\"\n                + \"    s.name as name,\\n\"\n                + \"    d.platform as platform\\n\"\n                + \"FROM source_table s, dim_table as d\\n\"\n                + \"WHERE s.user_id = d.user_id\\n\"\n                + \"AND s.proctime BETWEEN d.proctime - INTERVAL '4' HOUR AND d.proctime;\";\n\n        String exampleSql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  
'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table INNER JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id\\n\"\n                + \"AND show_log_table.proctime BETWEEN click_log_table.proctime - INTERVAL '4' HOUR AND click_log_table.proctime;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.streaming.api.operators.co.KeyedCoProcessOperator}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.ProcTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(exampleSql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_01_proctime/Interval_Left_Joins_ProcesingTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._01_proctime;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Interval_Left_Joins_ProcesingTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Join 处理时间案例\");\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE dim_table (\\n\"\n                + \"  user_id BIGINT,\\n\"\n                + \"  platform STRING,\\n\"\n                + \"  proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.platform.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    platform STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    s.user_id as user_id,\\n\"\n                + \"    s.name as name,\\n\"\n                + \"    d.platform as platform\\n\"\n                + \"FROM source_table s, dim_table as d\\n\"\n                + \"WHERE s.user_id = d.user_id\\n\"\n                + \"AND s.proctime BETWEEN d.proctime - INTERVAL '4' HOUR AND d.proctime;\";\n\n        String exampleSql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  
'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table LEFT JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id\\n\"\n                + \"AND show_log_table.proctime BETWEEN click_log_table.proctime - INTERVAL '4' HOUR AND click_log_table.proctime;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.streaming.api.operators.co.KeyedCoProcessOperator}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.ProcTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(exampleSql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_01_proctime/Interval_Right_Joins_ProcesingTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._01_proctime;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Interval_Right_Joins_ProcesingTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Join 处理时间案例\");\n        flinkEnv.env().setParallelism(1);\n\n        String exampleSql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table RIGHT JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id\\n\"\n                + \"AND show_log_table.proctime BETWEEN click_log_table.proctime - INTERVAL '4' HOUR AND click_log_table.proctime;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.streaming.api.operators.co.KeyedCoProcessOperator}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.ProcTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(exampleSql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_02_row_time/Interval_Full_JoinsOnNotEqual_EventTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._02_row_time;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Interval_Full_JoinsOnNotEqual_EventTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table FULL JOIN click_log_table ON show_log_table.log_id > click_log_table.log_id\\n\"\n                + \"AND show_log_table.row_time BETWEEN click_log_table.row_time - INTERVAL '4' HOUR AND click_log_table.row_time;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.KeyedCoProcessOperatorWithWatermarkDelay}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.RowTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_02_row_time/Interval_Full_Joins_EventTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._02_row_time;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Interval_Full_Joins_EventTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n        String sql =\n                \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table FULL JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id\\n\"\n                + \"AND show_log_table.row_time BETWEEN click_log_table.row_time - INTERVAL '5' SECOND AND click_log_table.row_time;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.KeyedCoProcessOperatorWithWatermarkDelay}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.RowTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_02_row_time/Interval_Inner_Joins_EventTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._02_row_time;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n// https://developer.aliyun.com/article/679659\npublic class Interval_Inner_Joins_EventTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table INNER JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id\\n\"\n                + \"AND show_log_table.row_time BETWEEN click_log_table.row_time - INTERVAL '4' HOUR AND click_log_table.row_time;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.KeyedCoProcessOperatorWithWatermarkDelay}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.RowTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_02_row_time/Interval_Left_Joins_EventTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._02_row_time;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Interval_Left_Joins_EventTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n        String sql = \"CREATE TABLE show_log (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '5',\\n\"\n                + \"  'fields.log_id.max' = '15'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log.log_id as s_id,\\n\"\n                + \"    show_log.show_params as s_params,\\n\"\n                + \"    click_log.log_id as c_id,\\n\"\n                + \"    click_log.click_params as c_params\\n\"\n                + \"FROM show_log LEFT JOIN click_log ON show_log.log_id = click_log.log_id\\n\"\n                + \"AND show_log.row_time BETWEEN click_log.row_time - INTERVAL '5' SECOND AND click_log.row_time + INTERVAL '5' SECOND;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.KeyedCoProcessOperatorWithWatermarkDelay}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.RowTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_02_interval_joins/_02_row_time/Interval_Right_Joins_EventTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._02_interval_joins._02_row_time;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Interval_Right_Joins_EventTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table RIGHT JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id\\n\"\n                + \"AND show_log_table.row_time BETWEEN click_log_table.row_time - INTERVAL '4' HOUR AND click_log_table.row_time;\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.KeyedCoProcessOperatorWithWatermarkDelay}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.RowTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_03_temporal_join/_01_proctime/Temporal_Join_ProcesingTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._03_temporal_join._01_proctime;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n// https://developer.aliyun.com/article/679659\npublic class Temporal_Join_ProcesingTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Temporal Join 处理时间案例\");\n        flinkEnv.env().setParallelism(1);\n\n        String exampleSql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log_table.log_id as s_id,\\n\"\n                + \"    show_log_table.show_params as s_params,\\n\"\n                + \"    click_log_table.log_id as c_id,\\n\"\n                + \"    click_log_table.click_params as c_params\\n\"\n                + \"FROM show_log_table FULL JOIN click_log_table ON show_log_table.log_id = click_log_table.log_id\\n\"\n                + \"AND show_log_table.proctime BETWEEN click_log_table.proctime - INTERVAL '4' HOUR AND click_log_table.proctime;\";\n\n        Arrays.stream(exampleSql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_03_temporal_join/_02_row_time/Temporal_Join_EventTime_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._03_temporal_join._02_row_time;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Temporal_Join_EventTime_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Join 处理时间案例\");\n        flinkEnv.env().setParallelism(1);\n\n        String exampleSql = \"CREATE TABLE show_log (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.show_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE click_log (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    s_id BIGINT,\\n\"\n                + \"    s_params STRING,\\n\"\n                + \"    c_id BIGINT,\\n\"\n                + \"    c_params STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    show_log.log_id as s_id,\\n\"\n                + \"    show_log.show_params as s_params,\\n\"\n                + \"    click_log.log_id as c_id,\\n\"\n                + \"    click_log.click_params as c_params\\n\"\n                + \"FROM show_log FULL JOIN click_log ON show_log.log_id = click_log.log_id\\n\"\n                + \"AND show_log.proctime BETWEEN click_log.proctime - INTERVAL '4' HOUR AND click_log.proctime;\";\n\n        Arrays.stream(exampleSql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/RedisBatchLookupTest2.java",
    "content": "package flink.examples.sql._07.query._06_joins._04_lookup_join._01_redis;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n/**\n * redis 安装：https://blog.csdn.net/realize_dream/article/details/106227622\n * redis java client：https://www.cnblogs.com/chenyanbin/p/12088796.html\n */\npublic class RedisBatchLookupTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils\n                .getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv().getConfig()\n                .getConfiguration()\n                .setBoolean(\"is.dim.batch.mode\", true);\n\n        String exampleSql = \"CREATE TABLE show_log (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    `timestamp` as cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    user_id STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10000000',\\n\"\n                + \"  'fields.user_id.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE user_profile (\\n\"\n                + \"    user_id STRING,\\n\"\n                + \"    age STRING,\\n\"\n                + \"    sex STRING\\n\"\n                + \"    ) WITH (\\n\"\n                + \"  'connector' = 'redis',\\n\"\n                + \"  'hostname' = '127.0.0.1',\\n\"\n                + \"  'port' = '6379',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'lookup.cache.max-rows' = '500',\\n\"\n                + \"  'lookup.cache.ttl' = '3600',\\n\"\n                + \"  'lookup.max-retries' = '1'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    `timestamp` TIMESTAMP(3),\\n\"\n                + \"    user_id STRING,\\n\"\n                + \"    proctime TIMESTAMP(3),\\n\"\n                + \"    age STRING,\\n\"\n                + \"    sex STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT \\n\"\n                + \"    s.log_id as log_id\\n\"\n                + \"    , s.`timestamp` as `timestamp`\\n\"\n                + \"    , s.user_id as user_id\\n\"\n                + \"    , s.proctime as proctime\\n\"\n                + \"    , u.sex as sex\\n\"\n                + \"    , u.age as age\\n\"\n                + \"FROM show_log AS s\\n\"\n                + \"LEFT JOIN user_profile FOR SYSTEM_TIME AS OF s.proctime AS u\\n\"\n                + \"ON s.user_id = u.user_id\";\n\n        Arrays.stream(exampleSql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/RedisDemo.java",
    "content": "package flink.examples.sql._07.query._06_joins._04_lookup_join._01_redis;\n\nimport java.util.HashMap;\nimport java.util.List;\n\nimport com.google.gson.Gson;\n\nimport redis.clients.jedis.Jedis;\nimport redis.clients.jedis.JedisPool;\nimport redis.clients.jedis.Pipeline;\n\n/**\n * redis 安装：https://blog.csdn.net/realize_dream/article/details/106227622\n * redis java client：https://www.cnblogs.com/chenyanbin/p/12088796.html\n */\npublic class RedisDemo {\n\n    public static void main(String[] args) {\n        singleConnect();\n//        poolConnect();\n//        pipeline();\n    }\n\n    public static void singleConnect() {\n        // jedis单实例连接\n        Jedis jedis = new Jedis(\"127.0.0.1\", 6379);\n        String result = jedis.get(\"a\");\n\n        HashMap<String, Object> h = new HashMap<>();\n\n        h.put(\"sex\", \"男\");\n        h.put(\"age\", \"18-24\");\n\n        String s = new Gson().toJson(h);\n\n        jedis.set(\"c\", s);\n\n        System.out.println(result);\n        jedis.close();\n    }\n\n    public static void poolConnect() {\n        //jedis连接池\n        JedisPool pool = new JedisPool(\"127.0.0.1\", 6379);\n        Jedis jedis = pool.getResource();\n        String result = jedis.get(\"a\");\n        System.out.println(result);\n        jedis.close();\n        pool.close();\n    }\n\n    public static void pipeline() {\n        //jedis连接池\n        JedisPool pool = new JedisPool(\"127.0.0.1\", 6379);\n        Jedis jedis = pool.getResource();\n\n        Pipeline pipeline = jedis.pipelined();\n\n        long setStart = System.currentTimeMillis();\n        for (int i = 0; i < 10000; i++) {\n            jedis.set(\"key_\" + i, String.valueOf(i));\n        }\n        long setEnd = System.currentTimeMillis();\n        System.out.println(\"非pipeline操作10000次字符串数据类型set写入，耗时：\" + (setEnd - setStart) + \"毫秒\");\n\n        long pipelineStart = System.currentTimeMillis();\n        for (int i = 0; i < 10000; i++) {\n            pipeline.set(\"key_\" + i, String.valueOf(i));\n        }\n        List<Object> l = pipeline.syncAndReturnAll();\n        long pipelineEnd = System.currentTimeMillis();\n        System.out.println(\"pipeline操作10000次字符串数据类型set写入，耗时：\" + (pipelineEnd - pipelineStart) + \"毫秒\");\n\n\n        String result = jedis.get(\"a\");\n        System.out.println(result);\n        jedis.close();\n        pool.close();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/RedisLookupTest.java",
    "content": "package flink.examples.sql._07.query._06_joins._04_lookup_join._01_redis;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.TableResult;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\n/**\n * redis 安装：https://blog.csdn.net/realize_dream/article/details/106227622\n * redis java client：https://www.cnblogs.com/chenyanbin/p/12088796.html\n */\npublic class RedisLookupTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        Table sourceTable = tEnv.fromDataStream(r, Schema.newBuilder()\n                .columnByExpression(\"proctime\", \"PROCTIME()\")\n                .build());\n\n        tEnv.createTemporaryView(\"leftTable\", sourceTable);\n\n        String sql = \"CREATE TABLE dimTable (\\n\"\n                + \"    name STRING,\\n\"\n                + \"    name1 STRING,\\n\"\n                + \"    score BIGINT\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'redis',\\n\"\n                + \"  'hostname' = '127.0.0.1',\\n\"\n                + \"  'port' = '6379',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'lookup.cache.max-rows' = '500',\\n\"\n                + \"  'lookup.cache.ttl' = '3600',\\n\"\n                + \"  'lookup.max-retries' = '1'\\n\"\n                + \")\";\n\n        String joinSql = \"SELECT o.f0, o.f1, c.name, c.name1, c.score\\n\"\n                + \"FROM leftTable AS o\\n\"\n                + \"LEFT JOIN dimTable FOR SYSTEM_TIME AS OF o.proctime AS c\\n\"\n                + \"ON o.f0 = c.name\";\n\n        TableResult dimTable = tEnv.executeSql(sql);\n\n        Table t = tEnv.sqlQuery(joinSql);\n\n        //        Table t = tEnv.sqlQuery(\"select * from leftTable\");\n\n        tEnv.toAppendStream(t, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\", \"b\", 1L));\n\n                Thread.sleep(10L);\n            }\n\n        }\n\n\n        @Override\n   
     public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/RedisLookupTest2.java",
    "content": "package flink.examples.sql._07.query._06_joins._04_lookup_join._01_redis;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n/**\n * redis 安装：https://blog.csdn.net/realize_dream/article/details/106227622\n * redis java client：https://www.cnblogs.com/chenyanbin/p/12088796.html\n */\npublic class RedisLookupTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setBoolean(\"is.dim.batch.mode\", false);\n\n        String sql = \"CREATE TABLE left_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.click_params.length' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE dim_table (\\n\"\n                + \"    name STRING,\\n\"\n                + \"    age BIGINT) WITH (\\n\"\n                + \"  'connector' = 'redis',\\n\"\n                + \"  'hostname' = '127.0.0.1',\\n\"\n                + \"  'port' = '6379',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'lookup.cache.max-rows' = '500',\\n\"\n                + \"  'lookup.cache.ttl' = '3600',\\n\"\n                + \"  'lookup.max-retries' = '1'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    click_params STRING,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    proctime TIMESTAMP(3),\\n\"\n                + \"    d_name STRING,\\n\"\n                + \"    age BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT l.log_id as log_id, l.click_params as click_params, l.name as name, l.proctime as proctime,\"\n                + \" d.name as d_name, d.age as age\\n\"\n                + \"FROM left_table AS l\\n\"\n                + \"LEFT JOIN dim_table FOR SYSTEM_TIME AS OF l.proctime AS d\\n\"\n                + \"ON l.name = d.name\";\n\n        String exampleSql = \"CREATE TABLE show_log (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    `timestamp` as cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    user_id STRING,\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \")\\n\"\n                + \"WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.user_id.length' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE 
user_profile (\\n\"\n                + \"    user_id STRING,\\n\"\n                + \"    age STRING,\\n\"\n                + \"    sex STRING\\n\"\n                + \"    ) WITH (\\n\"\n                + \"  'connector' = 'redis',\\n\"\n                + \"  'hostname' = '127.0.0.1',\\n\"\n                + \"  'port' = '6379',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'lookup.cache.max-rows' = '500',\\n\"\n                + \"  'lookup.cache.ttl' = '3600',\\n\"\n                + \"  'lookup.max-retries' = '1'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    `timestamp` TIMESTAMP(3),\\n\"\n                + \"    user_id STRING,\\n\"\n                + \"    proctime TIMESTAMP(3),\\n\"\n                + \"    age STRING,\\n\"\n                + \"    sex STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT \\n\"\n                + \"    s.log_id as log_id\\n\"\n                + \"    , s.`timestamp` as `timestamp`\\n\"\n                + \"    , s.user_id as user_id\\n\"\n                + \"    , s.proctime as proctime\\n\"\n                // 注意：INSERT INTO 按字段位置写入，这里先 age 后 sex，与 sink_table 的列顺序保持一致\n                + \"    , u.age as age\\n\"\n                + \"    , u.sex as sex\\n\"\n                + \"FROM show_log AS s\\n\"\n                + \"LEFT JOIN user_profile FOR SYSTEM_TIME AS OF s.proctime AS u\\n\"\n                + \"ON s.user_id = u.user_id\";\n\n        Arrays.stream(exampleSql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/pipeline/BatchJoinTableFuncCollector$8.java",
    "content": "////package flink.examples.sql._07.query._06_joins._04_lookup_join._01_redis.pipeline;\n////\n////\n//import java.util.List;\n//\n//public class BatchJoinTableFuncCollector$8 extends org.apache.flink.table.runtime.collector.TableFunctionCollector {\n//\n//    org.apache.flink.table.data.GenericRowData out = new org.apache.flink.table.data.GenericRowData(2);\n//    org.apache.flink.table.data.utils.JoinedRowData joinedRow$7 = new org.apache.flink.table.data.utils.JoinedRowData();\n//\n//    public BatchJoinTableFuncCollector$8(Object[] references) throws Exception {\n//\n//    }\n//\n//    @Override\n//    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//    }\n//\n//    @Override\n//    public void collect(Object record) throws Exception {\n//        List<org.apache.flink.table.data.RowData> l = (List<org.apache.flink.table.data.RowData>) getInput();\n//        List<org.apache.flink.table.data.RowData> r = (List<org.apache.flink.table.data.RowData>) record;\n//\n//        for (int i = 0; i < l.size(); i++) {\n//\n//            org.apache.flink.table.data.RowData in1 = l.get(i);\n//            org.apache.flink.table.data.RowData in2 = r.get(i);\n//\n//            org.apache.flink.table.data.binary.BinaryStringData field$5;\n//            boolean isNull$5;\n//            long field$6;\n//            boolean isNull$6;\n//            isNull$6 = in2.isNullAt(1);\n//            field$6 = -1L;\n//            if (!isNull$6) {\n//                field$6 = in2.getLong(1);\n//            }\n//            isNull$5 = in2.isNullAt(0);\n//            field$5 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//            if (!isNull$5) {\n//                field$5 = ((org.apache.flink.table.data.binary.BinaryStringData) in2.getString(0));\n//            }\n//\n//\n//\n//\n//\n//\n//            if (isNull$5) {\n//                out.setField(0, null);\n//            } else {\n//                out.setField(0, field$5);\n//            }\n//\n//\n//\n//            if (isNull$6) {\n//                out.setField(1, null);\n//            } else {\n//                out.setField(1, field$6);\n//            }\n//\n//\n//            joinedRow$7.replace(in1, out);\n//            joinedRow$7.setRowKind(in1.getRowKind());\n//            outputResult(joinedRow$7);\n//        }\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//    }\n//}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/pipeline/BatchLookupFunction$4.java",
    "content": "//package flink.examples.sql._07.query._06_joins._04_lookup_join._01_redis.pipeline;\n//\n//\n//import java.util.LinkedList;\n//import java.util.List;\n//\n//public class PipelineLookupFunction$4\n//        extends org.apache.flink.api.common.functions.RichFlatMapFunction {\n//\n//    private transient flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction\n//            function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc;\n//    private TableFunctionResultConverterCollector$2 resultConverterCollector$3 = null;\n//\n//    public PipelineLookupFunction$4(Object[] references) throws Exception {\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc =\n//                (((flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction) references[0]));\n//    }\n//\n//\n//    @Override\n//    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc\n//                .open(new org.apache.flink.table.functions.FunctionContext(getRuntimeContext()));\n//\n//\n//        resultConverterCollector$3 = new TableFunctionResultConverterCollector$2();\n//        resultConverterCollector$3.setRuntimeContext(getRuntimeContext());\n//        resultConverterCollector$3.open(new org.apache.flink.configuration.Configuration());\n//\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc\n//                .setCollector(resultConverterCollector$3);\n//\n//    }\n//\n//    @Override\n//    public void flatMap(Object _in1, org.apache.flink.util.Collector c) throws Exception {\n//        // 改动第一处\n//        List<org.apache.flink.table.data.RowData> in1 = (List<org.apache.flink.table.data.RowData>) _in1;\n//\n//        List<org.apache.flink.table.data.binary.BinaryStringData> list = new LinkedList<>();\n//\n//        for (int i = 0; i < in1.size(); i++) {\n//\n//            org.apache.flink.table.data.binary.BinaryStringData field$0;\n//            boolean isNull$0;\n//            isNull$0 = in1.get(i).isNullAt(2);\n//            field$0 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//            if (!isNull$0) {\n//                field$0 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.get(i).getString(2));\n//            }\n//\n//            list.add(field$0);\n//        }\n//\n//        resultConverterCollector$3.setCollector(c);\n//\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc\n//                .eval(((List<org.apache.flink.table.data.binary.BinaryStringData>) list));\n//\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc\n//                .close();\n//\n//    }\n//\n//\n//    public class TableFunctionResultConverterCollector$2\n//            extends org.apache.flink.table.runtime.collector.WrappingCollector {\n//\n//\n//        public TableFunctionResultConverterCollector$2() throws Exception {\n//\n//        }\n//\n//        
@Override\n//        public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//        }\n//\n//        @Override\n//        public void collect(Object record) throws Exception {\n//            List<org.apache.flink.table.data.RowData> externalResult$1 = (List<org.apache.flink.table.data.RowData>) record;\n//\n//\n//            if (externalResult$1 != null) {\n//                outputResult(externalResult$1);\n//            }\n//\n//        }\n//\n//        @Override\n//        public void close() {\n//            try {\n//\n//            } catch (Exception e) {\n//                throw new RuntimeException(e);\n//            }\n//        }\n//    }\n//\n//}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/pipeline/JoinTableFuncCollector$8.java",
    "content": "//\n//import java.util.List;\n//\n//public class JoinTableFuncCollector$9 extends org.apache.flink.table.runtime.collector.TableFunctionCollector {\n//\n//    org.apache.flink.table.data.GenericRowData out = new org.apache.flink.table.data.GenericRowData(3);\n//    org.apache.flink.table.data.utils.JoinedRowData joinedRow$8 = new org.apache.flink.table.data.utils.JoinedRowData();\n//\n//    public JoinTableFuncCollector$9(Object[] references) throws Exception {\n//\n//    }\n//\n//    @Override\n//    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//    }\n//\n//    @Override\n//    public void collect(Object record) throws Exception {\n//        List<org.apache.flink.table.data.RowData> l = (List<org.apache.flink.table.data.RowData>) getInput();\n//        List<org.apache.flink.table.data.RowData> r = (List<org.apache.flink.table.data.RowData>) record;\n//        for (int i = 0; i < l.size(); i++) {\n//            org.apache.flink.table.data.RowData in1 = (org.apache.flink.table.data.RowData) l.get(i);\n//            org.apache.flink.table.data.RowData in2 = (org.apache.flink.table.data.RowData) r.get(i);\n//            org.apache.flink.table.data.binary.BinaryStringData field$5;\n//            boolean isNull$5;\n//            org.apache.flink.table.data.binary.BinaryStringData field$6;\n//            boolean isNull$6;\n//            org.apache.flink.table.data.binary.BinaryStringData field$7;\n//            boolean isNull$7;\n//            isNull$7 = in2.isNullAt(2);\n//            field$7 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//            if (!isNull$7) {\n//                field$7 = ((org.apache.flink.table.data.binary.BinaryStringData) in2.getString(2));\n//            }\n//            isNull$6 = in2.isNullAt(1);\n//            field$6 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//            if (!isNull$6) {\n//                field$6 = ((org.apache.flink.table.data.binary.BinaryStringData) in2.getString(1));\n//            }\n//            isNull$5 = in2.isNullAt(0);\n//            field$5 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//            if (!isNull$5) {\n//                field$5 = ((org.apache.flink.table.data.binary.BinaryStringData) in2.getString(0));\n//            }\n//            if (isNull$5) {\n//                out.setField(0, null);\n//            } else {\n//                out.setField(0, field$5);\n//            }\n//            if (isNull$6) {\n//                out.setField(1, null);\n//            } else {\n//                out.setField(1, field$6);\n//            }\n//            if (isNull$7) {\n//                out.setField(2, null);\n//            } else {\n//                out.setField(2, field$7);\n//            }\n//            joinedRow$8.replace(in1, out);\n//            joinedRow$8.setRowKind(in1.getRowKind());\n//            outputResult(joinedRow$8);\n//\n//        }\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//    }\n//}\n//"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/pipeline/JoinTableFuncCollector$9.java",
    "content": "//\n//import java.util.List;\n//\n//public class JoinTableFuncCollector$9 extends org.apache.flink.table.runtime.collector.TableFunctionCollector {\n//\n//    org.apache.flink.table.data.GenericRowData out = new org.apache.flink.table.data.GenericRowData(2);\n//    org.apache.flink.table.data.utils.JoinedRowData joinedRow$7 = new org.apache.flink.table.data.utils.JoinedRowData();\n//\n//    public JoinTableFuncCollector$9(Object[] references) throws Exception {\n//\n//    }\n//\n//    @Override\n//    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//    }\n//\n//    @Override\n//    public void collect(Object record) throws Exception {\n//        List<org.apache.flink.table.data.RowData> in1 = (List<org.apache.flink.table.data.RowData>) getInput();\n//        List<org.apache.flink.table.data.RowData> in2 = (List<org.apache.flink.table.data.RowData>) record;\n//\n//        for (int i = 0; i < in1.size(); i++) {\n//\n//            org.apache.flink.table.data.binary.BinaryStringData field$5;\n//            boolean isNull$5;\n//            long field$6;\n//            boolean isNull$6;\n//            isNull$6 = in2.get(i).isNullAt(1);\n//            field$6 = -1L;\n//            if (!isNull$6) {\n//                field$6 = in2.get(i).getLong(1);\n//            }\n//            isNull$5 = in2.get(i).isNullAt(0);\n//            field$5 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//            if (!isNull$5) {\n//                field$5 = ((org.apache.flink.table.data.binary.BinaryStringData) in2.get(i).getString(0));\n//            }\n//\n//\n//\n//\n//\n//\n//            if (isNull$5) {\n//                out.setField(0, null);\n//            } else {\n//                out.setField(0, field$5);\n//            }\n//\n//\n//\n//            if (isNull$6) {\n//                out.setField(1, null);\n//            } else {\n//                out.setField(1, field$6);\n//            }\n//\n//\n//            joinedRow$7.replace(in1.get(i), out);\n//            joinedRow$7.setRowKind(in1.get(i).getRowKind());\n//            outputResult(joinedRow$7);\n//\n//        }\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//    }\n//}\n//"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/pipeline/LookupFunction$4.java",
    "content": "//\n//public class LookupFunction$4\n//        extends org.apache.flink.api.common.functions.RichFlatMapFunction {\n//\n//    private transient flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$48c9b464341243406b9f0b4a0ba51d1c;\n//    private TableFunctionResultConverterCollector$2 resultConverterCollector$3 = null;\n//\n//    public LookupFunction$4(Object[] references) throws Exception {\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$48c9b464341243406b9f0b4a0ba51d1c =\n//                (((flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction) references[0]));\n//    }\n//\n//\n//\n//    @Override\n//    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$48c9b464341243406b9f0b4a0ba51d1c.open(new org.apache.flink.table.functions.FunctionContext(getRuntimeContext()));\n//\n//\n//        resultConverterCollector$3 = new TableFunctionResultConverterCollector$2();\n//        resultConverterCollector$3.setRuntimeContext(getRuntimeContext());\n//        resultConverterCollector$3.open(new org.apache.flink.configuration.Configuration());\n//\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$48c9b464341243406b9f0b4a0ba51d1c.setCollector(resultConverterCollector$3);\n//\n//    }\n//\n//    @Override\n//    public void flatMap(Object _in1, org.apache.flink.util.Collector c) throws Exception {\n//        org.apache.flink.table.data.RowData in1 = (org.apache.flink.table.data.RowData) _in1;\n//        org.apache.flink.table.data.binary.BinaryStringData field$0;\n//        boolean isNull$0;\n//        isNull$0 = in1.isNullAt(2);\n//        field$0 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//        if (!isNull$0) {\n//            field$0 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(2));\n//        }\n//        resultConverterCollector$3.setCollector(c);\n//        if (isNull$0) {\n//            // skip\n//        } else {\n//            function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$48c9b464341243406b9f0b4a0ba51d1c\n//                    .eval((org.apache.flink.table.data.binary.BinaryStringData) field$0);\n//        }\n//\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$48c9b464341243406b9f0b4a0ba51d1c.close();\n//\n//    }\n//\n//\n//    public class TableFunctionResultConverterCollector$2 extends org.apache.flink.table.runtime.collector.WrappingCollector {\n//\n//\n//\n//        public TableFunctionResultConverterCollector$2() throws Exception {\n//\n//        }\n//\n//        @Override\n//        public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//        }\n//\n//        @Override\n//        public void collect(Object record) throws Exception {\n//            org.apache.flink.table.data.RowData externalResult$1 = (org.apache.flink.table.data.RowData) record;\n//\n//\n//\n//\n//            if (externalResult$1 != null) {\n//                outputResult(externalResult$1);\n//            
}\n//\n//        }\n//\n//        @Override\n//        public void close() {\n//            try {\n//\n//            } catch (Exception e) {\n//                throw new RuntimeException(e);\n//            }\n//        }\n//    }\n//\n//}\n//"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/pipeline/LookupFunction$5.java",
    "content": "//\n//import java.util.LinkedList;\n//import java.util.List;\n//public class LookupFunction$4\n//        extends org.apache.flink.api.common.functions.RichFlatMapFunction {\n//\n//    private transient flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc;\n//    private TableFunctionResultConverterCollector$2 resultConverterCollector$3 = null;\n//\n//    public LookupFunction$4(Object[] references) throws Exception {\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc = (((flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction) references[0]));\n//    }\n//\n//\n//\n//    @Override\n//    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.open(new org.apache.flink.table.functions.FunctionContext(getRuntimeContext()));\n//\n//\n//        resultConverterCollector$3 = new TableFunctionResultConverterCollector$2();\n//        resultConverterCollector$3.setRuntimeContext(getRuntimeContext());\n//        resultConverterCollector$3.open(new org.apache.flink.configuration.Configuration());\n//\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.setCollector(resultConverterCollector$3);\n//\n//    }\n//\n//    @Override\n//    public void flatMap(Object _in1, org.apache.flink.util.Collector c) throws Exception {\n//        List<org.apache.flink.table.data.RowData> l = (List<org.apache.flink.table.data.RowData>) _in1;\n//        List<org.apache.flink.table.data.binary.BinaryStringData> list = new LinkedList<>();\n//        for (int i = 0; i < l.size(); i++) {\n//\n//            org.apache.flink.table.data.RowData in1 = (org.apache.flink.table.data.RowData) l.get(i);\n//\n//            org.apache.flink.table.data.binary.BinaryStringData field$0;\n//            boolean isNull$0;\n//\n//            isNull$0 = in1.isNullAt(2);\n//            field$0 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//            if (!isNull$0) {\n//                field$0 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(2));\n//            }\n//\n//            list.add(field$0);\n//        }\n//\n//\n//        resultConverterCollector$3.setCollector(c);\n//\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.eval((List<org.apache.flink.table.data.binary.BinaryStringData>) list);\n//\n//\n//    }\n//\n//    @Override\n//    public void close() throws Exception {\n//\n//        function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.close();\n//\n//    }\n//\n//\n//    public class TableFunctionResultConverterCollector$2 extends org.apache.flink.table.runtime.collector.WrappingCollector {\n//\n//\n//\n//        public TableFunctionResultConverterCollector$2() throws Exception {\n//\n//        }\n//\n//        @Override\n//        public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//\n//        }\n//\n//        @Override\n//        public 
void collect(Object record) throws Exception {\n//            List<org.apache.flink.table.data.RowData> externalResult$1 = (List<org.apache.flink.table.data.RowData>) record;\n//\n//\n//\n//\n//            if (externalResult$1 != null) {\n//                outputResult(externalResult$1);\n//            }\n//\n//        }\n//\n//        @Override\n//        public void close() {\n//            try {\n//\n//            } catch (Exception e) {\n//                throw new RuntimeException(e);\n//            }\n//        }\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_04_lookup_join/_01_redis/pipeline/T1.java",
    "content": "///* 1 */\n///* 2 */\n//\n//import java.util.LinkedList;\n//import java.util.List;\n//\n///* 3 */\n///* 4 */      public class LookupFunction$4\n//        /* 5 */          extends org.apache.flink.api.common.functions.RichFlatMapFunction {\n//    /* 6 */\n//    /* 7 */        private transient flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc;\n//    /* 8 */        private TableFunctionResultConverterCollector$2 resultConverterCollector$3 = null;\n//    /* 9 */\n//    /* 10 */        public LookupFunction$4(Object[] references) throws Exception {\n//        /* 11 */          function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc = (((flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction) references[0]));\n//        /* 12 */        }\n//    /* 13 */\n//    /* 14 */\n//    /* 15 */\n//    /* 16 */        @Override\n//    /* 17 */        public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//        /* 18 */\n//        /* 19 */          function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.open(new org.apache.flink.table.functions.FunctionContext(getRuntimeContext()));\n//        /* 20 */\n//        /* 21 */\n//        /* 22 */          resultConverterCollector$3 = new TableFunctionResultConverterCollector$2();\n//        /* 23 */          resultConverterCollector$3.setRuntimeContext(getRuntimeContext());\n//        /* 24 */          resultConverterCollector$3.open(new org.apache.flink.configuration.Configuration());\n//        /* 25 */\n//        /* 26 */\n//        /* 27 */          function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.setCollector(resultConverterCollector$3);\n//        /* 28 */\n//        /* 29 */        }\n//    /* 30 */\n//    /* 31 */        @Override\n//    /* 32 */        public void flatMap(Object _in1, org.apache.flink.util.Collector c) throws Exception {\n//        /* 33 */          List<org.apache.flink.table.data.RowData> l = (List<org.apache.flink.table.data.RowData>) _in1;\n//        /* 34 */          List<org.apache.flink.table.data.binary.BinaryStringData> list = new LinkedList<>();\n//        /* 35 */          for (int i = 0; i < l.size(); i++) {\n//            /* 36 */\n//            /* 37 */              org.apache.flink.table.data.RowData in1 = (org.apache.flink.table.data.RowData) l.get(i);\n//            /* 38 */\n//            /* 39 */\n//            /* 40 */              org.apache.flink.table.data.binary.BinaryStringData field$0;\n//            /* 41 */              boolean isNull$0;\n//            /* 42 */\n//            /* 43 */              isNull$0 = in1.isNullAt(2);\n//            /* 44 */              field$0 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\n//            /* 45 */              if (!isNull$0) {\n//                /* 46 */                field$0 = ((org.apache.flink.table.data.binary.BinaryStringData) in1.getString(2));\n//                /* 47 */              }\n//            /* 48 */\n//            /* 49 */              list.add(field$0);\n//            /* 50 */          }\n//        /* 51 */\n//        /* 52 */\n//        /* 53 */          
resultConverterCollector$3.setCollector(c);\n//        /* 54 */\n//        /* 55 */\n//        /* 56 */          function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.eval((List<org.apache.flink.table.data.binary.BinaryStringData>) list);\n//        /* 57 */\n//        /* 58 */\n//        /* 59 */        }\n//    /* 60 */\n//    /* 61 */        @Override\n//    /* 62 */        public void close() throws Exception {\n//        /* 63 */\n//        /* 64 */          function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.close();\n//        /* 65 */\n//        /* 66 */        }\n//    /* 67 */\n//    /* 68 */\n//    /* 69 */              public class TableFunctionResultConverterCollector$2 extends org.apache.flink.table.runtime.collector.WrappingCollector {\n//        /* 70 */\n//        /* 71 */\n//        /* 72 */\n//        /* 73 */                public TableFunctionResultConverterCollector$2() throws Exception {\n//            /* 74 */\n//            /* 75 */                }\n//        /* 76 */\n//        /* 77 */                @Override\n//        /* 78 */                public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\n//            /* 79 */\n//            /* 80 */                }\n//        /* 81 */\n//        /* 82 */                @Override\n//        /* 83 */                public void collect(Object record) throws Exception {\n//            /* 84 */                  List<org.apache.flink.table.data.RowData> externalResult$1 = (List<org.apache.flink.table.data.RowData>) record;\n//            /* 85 */\n//            /* 86 */\n//            /* 87 */\n//            /* 88 */\n//            /* 89 */                  if (externalResult$1 != null) {\n//                /* 90 */                    outputResult(externalResult$1);\n//                /* 91 */                  }\n//            /* 92 */\n//            /* 93 */                }\n//        /* 94 */\n//        /* 95 */                @Override\n//        /* 96 */                public void close() {\n//            /* 97 */                  try {\n//                /* 98 */\n//                /* 99 */                  } catch (Exception e) {\n//                /* 100 */                    throw new RuntimeException(e);\n//                /* 101 */                  }\n//            /* 102 */                }\n//        /* 103 */              }\n//    /* 104 */\n//    /* 105 */      }\n///* 106 */"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_05_array_expansion/_01_ArrayExpansionTest.java",
    "content": "package flink.examples.sql._07.query._06_joins._05_array_expansion;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class _01_ArrayExpansionTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params ARRAY<STRING>\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_param STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    log_id,\\n\"\n                + \"    t.show_param as show_param\\n\"\n                + \"FROM show_log_table\\n\"\n                + \"CROSS JOIN UNNEST(show_params) AS t (show_param)\";\n\n\n        String originalSql = \"CREATE TABLE show_log_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params ARRAY<STRING>\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.log_id.min' = '1',\\n\"\n                + \"  'fields.log_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    log_id BIGINT,\\n\"\n                + \"    show_params ARRAY<STRING>\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    log_id,\\n\"\n                + \"    show_params\\n\"\n                + \"FROM show_log_table\\n\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.stream.StreamingJoinOperator}\n          */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_06_table_function/_01_inner_join/TableFunctionInnerJoin_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._06_table_function._01_inner_join;\n\nimport java.util.Arrays;\n\nimport org.apache.flink.table.functions.TableFunction;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TableFunctionInnerJoin_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE FUNCTION user_profile_table_func AS 'flink.examples.sql._07.query._06_joins._06_table_function\"\n                + \"._01_inner_join.TableFunctionInnerJoin_Test$UserProfileTableFunction';\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    age INT,\\n\"\n                + \"    row_time TIMESTAMP(3)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       age,\\n\"\n                + \"       row_time\\n\"\n                + \"FROM source_table,\\n\"\n                + \"LATERAL TABLE(user_profile_table_func(user_id)) t(age)\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n    public static class UserProfileTableFunction extends TableFunction<Integer> {\n\n        public void eval(long userId) {\n            // 自定义输出逻辑\n            if (userId <= 5) {\n                // 一行转 1 行\n                collect(1);\n            } else {\n                // 一行转 3 行\n                collect(1);\n                collect(2);\n                collect(3);\n            }\n        }\n\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_06_joins/_06_table_function/_01_inner_join/TableFunctionInnerJoin_WithEmptyTableFunction_Test.java",
    "content": "package flink.examples.sql._07.query._06_joins._06_table_function._01_inner_join;\n\nimport java.util.Arrays;\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.table.functions.TableFunction;\n\n\npublic class TableFunctionInnerJoin_WithEmptyTableFunction_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n        String sql = \"CREATE FUNCTION user_profile_table_func AS 'flink.examples.sql._07.query._06_joins._07_table_function\"\n                + \"._01_inner_join.TableFunctionInnerJoin_WithEmptyTableFunction_Test$UserProfile_EmptyTableFunction';\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    age INT,\\n\"\n                + \"    row_time TIMESTAMP(3)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO 
sink_table\\n\"\n                + \"SELECT user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       age,\\n\"\n                + \"       row_time\\n\"\n                + \"FROM source_table,\\n\"\n                + \"LATERAL TABLE(user_profile_table_func(user_id)) t(age)\";\n\n        /**\n         * join 算子：{@link org.apache.flink.table.runtime.operators.join.KeyedCoProcessOperatorWithWatermarkDelay}\n         *                 -> {@link org.apache.flink.table.runtime.operators.join.interval.RowTimeIntervalJoin}\n         *                       -> {@link org.apache.flink.table.runtime.operators.join.interval.IntervalJoinFunction}\n         */\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(tEnv::executeSql);\n    }\n\n    public static class UserProfile_EmptyTableFunction extends TableFunction<Integer> {\n\n        public void eval(long userId) {\n        }\n\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_07_deduplication/DeduplicationProcessingTimeTest.java",
    "content": "package flink.examples.sql._07.query._07_deduplication;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class DeduplicationProcessingTimeTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       server_timestamp\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          server_timestamp,\\n\"\n                + \"          row_number() over(partition by user_id order by proctime) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n        /**\n         * 算子 {@link org.apache.flink.streaming.api.operators.KeyedProcessOperator}\n         *      -- {@link org.apache.flink.table.runtime.operators.deduplicate.ProcTimeDeduplicateKeepFirstRowFunction}\n         */\n\n        for (String innerSql : sql.split(\";\")) {\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_07_deduplication/DeduplicationProcessingTimeTest1.java",
    "content": "package flink.examples.sql._07.query._07_deduplication;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class DeduplicationProcessingTimeTest1 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp BIGINT,\\n\"\n                + \"    rn BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       server_timestamp, rn\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          server_timestamp,\\n\"\n                + \"          row_number() over(partition by user_id order by proctime) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\";\n\n        /**\n         * 算子 {@link org.apache.flink.streaming.api.operators.KeyedProcessOperator}\n         *      -- {@link org.apache.flink.table.runtime.operators.deduplicate.ProcTimeDeduplicateKeepFirstRowFunction}\n         */\n\n        for (String innerSql : sql.split(\";\")) {\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_07_deduplication/DeduplicationRowTimeTest.java",
    "content": "package flink.examples.sql._07.query._07_deduplication;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class DeduplicationRowTimeTest {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[]{\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    level STRING COMMENT '用户等级',\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)) COMMENT '事件时间戳',\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.level.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '1000000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    level STRING COMMENT '等级',\\n\"\n                + \"    uv BIGINT COMMENT '当前等级用户数',\\n\"\n                + \"    row_time timestamp(3) COMMENT '时间戳'\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select \\n\"\n                + \"    level\\n\"\n                + \"    , count(1) as uv\\n\"\n                + \"    , max(row_time) as row_time\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          level,\\n\"\n                + \"          row_time,\\n\"\n                + \"          row_number() over(partition by user_id order by row_time desc) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\\n\"\n                + \"group by \\n\"\n                + \"    level\";\n\n        for (String innerSql : sql.split(\";\")) {\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_08_datastream_trans/AlertExample.java",
    "content": "package flink.examples.sql._07.query._08_datastream_trans;\n\nimport org.apache.flink.api.common.functions.FlatMapFunction;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.types.Row;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.extern.slf4j.Slf4j;\n\n@Slf4j\npublic class AlertExample {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String createTableSql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp_LTZ(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \")\\n\";\n\n        String querySql = \"SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '5' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\";\n\n        flinkEnv.streamTEnv().executeSql(createTableSql);\n\n        Table resultTable = flinkEnv.streamTEnv().sqlQuery(querySql);\n\n        flinkEnv.streamTEnv()\n                .toDataStream(resultTable, Row.class)\n                .flatMap(new FlatMapFunction<Row, Object>() {\n                    @Override\n                    public void flatMap(Row value, Collector<Object> out) throws Exception {\n                        long l = Long.parseLong(String.valueOf(value.getField(\"sum_money\")));\n\n                        if (l > 10000L) {\n                            log.info(\"报警，超过 1w\");\n                        }\n                    }\n                });\n\n        flinkEnv.env().execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_08_datastream_trans/AlertExampleRetract.java",
    "content": "package flink.examples.sql._07.query._08_datastream_trans;\n\nimport org.apache.flink.api.common.functions.FlatMapFunction;\nimport org.apache.flink.api.java.tuple.Tuple2;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.types.Row;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.extern.slf4j.Slf4j;\n\n@Slf4j\npublic class AlertExampleRetract {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String createTableSql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    `time` as cast(CURRENT_TIMESTAMP as bigint) * 1000\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \")\\n\";\n\n        String querySql = \"SELECT max(`time`), \\n\"\n                + \"      sum(money) as sum_money\\n\"\n                + \"FROM source_table\\n\"\n                + \"GROUP BY (`time` + 8 * 3600 * 1000) / (24 * 3600 * 1000)\";\n\n        flinkEnv.streamTEnv().executeSql(createTableSql);\n\n        Table resultTable = flinkEnv.streamTEnv().sqlQuery(querySql);\n\n        flinkEnv.streamTEnv()\n                .toRetractStream(resultTable, Row.class)\n                .flatMap(new FlatMapFunction<Tuple2<Boolean, Row>, Object>() {\n                    @Override\n                    public void flatMap(Tuple2<Boolean, Row> value, Collector<Object> out) throws Exception {\n                        long l = Long.parseLong(String.valueOf(value.f1.getField(\"sum_money\")));\n\n                        if (l > 10000L) {\n                            log.info(\"报警，超过 1w\");\n                        }\n                    }\n                });\n\n        flinkEnv.env().execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_08_datastream_trans/AlertExampleRetractError.java",
    "content": "package flink.examples.sql._07.query._08_datastream_trans;\n\nimport org.apache.flink.api.common.functions.FlatMapFunction;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.types.Row;\nimport org.apache.flink.util.Collector;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport lombok.extern.slf4j.Slf4j;\n\n@Slf4j\npublic class AlertExampleRetractError {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String createTableSql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    `time` as cast(CURRENT_TIMESTAMP as bigint) * 1000\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \")\\n\";\n\n        String querySql = \"SELECT max(`time`), \\n\"\n                + \"      sum(money) as sum_money\\n\"\n                + \"FROM source_table\\n\"\n                + \"GROUP BY (`time` + 8 * 3600 * 1000) / (24 * 3600 * 1000)\";\n\n        flinkEnv.streamTEnv().executeSql(createTableSql);\n\n        Table resultTable = flinkEnv.streamTEnv().sqlQuery(querySql);\n\n        flinkEnv.streamTEnv()\n                .toDataStream(resultTable, Row.class)\n                .flatMap(new FlatMapFunction<Row, Object>() {\n                    @Override\n                    public void flatMap(Row value, Collector<Object> out) throws Exception {\n                        long l = Long.parseLong(String.valueOf(value.getField(\"sum_money\")));\n\n                        if (l > 10000L) {\n                            log.info(\"报警，超过 1w\");\n                        }\n                    }\n                });\n\n        flinkEnv.env().execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_08_datastream_trans/RetractExample.java",
    "content": "//package flink.examples.sql._07.query._08_datastream_trans;\n//\n//import org.apache.flink.api.java.tuple.Tuple2;\n//import org.apache.flink.streaming.api.datastream.DataStream;\n//import org.apache.flink.table.api.Table;\n//import org.apache.flink.types.Row;\n//\n//import flink.examples.FlinkEnvUtils;\n//import flink.examples.FlinkEnvUtils.FlinkEnv;\n//import lombok.extern.slf4j.Slf4j;\n//\n//@Slf4j\n//public class RetractExample {\n//\n//    public static void main(String[] args) throws Exception {\n//\n//        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n//\n//        String createTableSql = \"CREATE TABLE source_table (\\n\"\n//                + \"    id BIGINT,\\n\"\n//                + \"    money BIGINT,\\n\"\n//                + \"    `time` as cast(CURRENT_TIMESTAMP as bigint) * 1000\\n\"\n//                + \") WITH (\\n\"\n//                + \"  'connector' = 'datagen',\\n\"\n//                + \"  'rows-per-second' = '1',\\n\"\n//                + \"  'fields.id.min' = '1',\\n\"\n//                + \"  'fields.id.max' = '100000',\\n\"\n//                + \"  'fields.money.min' = '1',\\n\"\n//                + \"  'fields.money.max' = '100000'\\n\"\n//                + \")\\n\";\n//\n//        String querySql = \"SELECT max(`time`), \\n\"\n//                + \"      sum(money) as sum_money\\n\"\n//                + \"FROM source_table\\n\"\n//                + \"GROUP BY (`time` + 8 * 3600 * 1000) / (24 * 3600 * 1000)\";\n//\n//        flinkEnv.streamTEnv().executeSql(createTableSql);\n//\n//        Table resultTable = flinkEnv.streamTEnv().sqlQuery(querySql);\n//\n//        DataStream<Tuple2<Boolean, Row>> d = flinkEnv.streamTEnv()\n//                .toChangelogStream(resultTable, Row.class);\n//\n//        flinkEnv.streamTEnv().from\n//\n//        flinkEnv.env().execute();\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_08_datastream_trans/Test.java",
    "content": "package flink.examples.sql._07.query._08_datastream_trans;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.Schema;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        DataStream<Row> r = flinkEnv.env().addSource(new UserDefinedSource());\n\n        // 数据源是 DataStream API\n        Table sourceTable = flinkEnv.streamTEnv().fromDataStream(r\n                , Schema\n                        .newBuilder()\n                        .column(\"f0\", \"string\")\n                        .column(\"f1\", \"string\")\n                        .column(\"f2\", \"bigint\")\n                        .columnByExpression(\"proctime\", \"PROCTIME()\")\n                        .build());\n\n        flinkEnv.streamTEnv().createTemporaryView(\"source_table\", sourceTable);\n\n        String selectDistinctSql = \"select distinct f0 from source_table\";\n\n        Table resultTable = flinkEnv.streamTEnv().sqlQuery(selectDistinctSql);\n\n        flinkEnv.streamTEnv().toRetractStream(resultTable, Row.class).print();\n\n        String groupBySql = \"select f0 from source_table group by f0\";\n\n        Table resultTable1 = flinkEnv.streamTEnv().sqlQuery(groupBySql);\n\n        flinkEnv.streamTEnv().toRetractStream(resultTable1, Row.class).print();\n\n        flinkEnv.env().execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                Thread.sleep(10L);\n                i++;\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_09_set_operations/Except_Test.java",
    "content": "package flink.examples.sql._07.query._09_set_operations;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Except_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table_2 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"Except\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_2\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_09_set_operations/Exist_Test.java",
    "content": "package flink.examples.sql._07.query._09_set_operations;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Exist_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table_2 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"WHERE user_id EXISTS (\\n\"\n                + \"     SELECT user_id\\n\"\n                + \"     FROM source_table_2\\n\"\n                + \")\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_09_set_operations/In_Test.java",
    "content": "package flink.examples.sql._07.query._09_set_operations;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class In_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table_2 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"WHERE user_id in (\\n\"\n                + \"     SELECT user_id\\n\"\n                + \"     FROM source_table_2\\n\"\n                + \")\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_09_set_operations/Intersect_Test.java",
    "content": "package flink.examples.sql._07.query._09_set_operations;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Intersect_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table_2 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"INTERSECT\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_2\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_09_set_operations/UnionAll_Test.java",
    "content": "package flink.examples.sql._07.query._09_set_operations;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class UnionAll_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table_2 (\\n\"\n                + \"    user_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"UNION ALL\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_2\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_09_set_operations/Union_Test.java",
    "content": "package flink.examples.sql._07.query._09_set_operations;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Union_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table_2 (\\n\"\n                + \"    user_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"UNION\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_2\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_10_order_by/OrderBy_with_time_attr_Test.java",
    "content": "package flink.examples.sql._07.query._10_order_by;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class OrderBy_with_time_attr_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"Order By row_time, user_id desc\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_10_order_by/OrderBy_without_time_attr_Test.java",
    "content": "package flink.examples.sql._07.query._10_order_by;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class OrderBy_without_time_attr_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        /**\n         * Exception in thread \"main\" org.apache.flink.table.api.TableException: Sort on a non-time-attribute field\n         * is not supported.\n         * \tat org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSort.translateToPlanInternal\n         * \t(StreamExecSort.java:75)\n         * \tat org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)\n         * \tat org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:247)\n         * \tat org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal\n         * \t(StreamExecSink.java:104)\n         * \tat org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)\n         * \tat org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner\n         * \t.scala:70)\n         * \tat org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner\n         * \t.scala:69)\n         * \tat scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)\n         * \tat scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)\n         * \tat scala.collection.Iterator$class.foreach(Iterator.scala:891)\n         * \tat scala.collection.AbstractIterator.foreach(Iterator.scala:1334)\n         * \tat scala.collection.IterableLike$class.foreach(IterableLike.scala:72)\n         * \tat scala.collection.AbstractIterable.foreach(Iterable.scala:54)\n         * \tat scala.collection.TraversableLike$class.map(TraversableLike.scala:234)\n         * \tat scala.collection.AbstractTraversable.map(Traversable.scala:104)\n         * \tat org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:69)\n         * \tat org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)\n         * \tat org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1518)\n         * \tat org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:740)\n         * \tat org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:856)\n         * \tat org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)\n         * \tat java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)\n         * \tat java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)\n         * \tat flink.examples.sql._07.query._10_order_by.OrderBy_Test.main(OrderBy_Test.java:36)\n         */\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + 
\"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"Order By user_id\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_11_limit/Limit_Test.java",
    "content": "package flink.examples.sql._07.query._11_limit;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Limit_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table_1 (\\n\"\n                + \"    user_id BIGINT NOT NULL,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT user_id\\n\"\n                + \"FROM source_table_1\\n\"\n                + \"Limit 3\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_12_topn/TopN_Test.java",
    "content": "package flink.examples.sql._07.query._12_topn;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TopN_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    name BIGINT NOT NULL,\\n\"\n                + \"    search_cnt BIGINT NOT NULL,\\n\"\n                + \"    key BIGINT NOT NULL,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.min' = '1',\\n\"\n                + \"  'fields.name.max' = '10',\\n\"\n                + \"  'fields.key.min' = '1',\\n\"\n                + \"  'fields.key.max' = '2',\\n\"\n                + \"  'fields.search_cnt.min' = '1000',\\n\"\n                + \"  'fields.search_cnt.max' = '10000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    key BIGINT,\\n\"\n                + \"    name BIGINT,\\n\"\n                + \"    search_cnt BIGINT,\\n\"\n                + \"    `timestamp` TIMESTAMP(3)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT key, name, search_cnt, row_time as `timestamp`\\n\"\n                + \"FROM (\\n\"\n                + \"   SELECT key, name, search_cnt, row_time, \\n\"\n                + \"     ROW_NUMBER() OVER (PARTITION BY key\\n\"\n                + \"       ORDER BY search_cnt desc) AS rownum\\n\"\n                + \"   FROM source_table)\\n\"\n                + \"WHERE rownum <= 100\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_13_window_topn/WindowTopN_Test.java",
    "content": "package flink.examples.sql._07.query._13_window_topn;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class WindowTopN_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    name BIGINT NOT NULL,\\n\"\n                + \"    search_cnt BIGINT NOT NULL,\\n\"\n                + \"    key BIGINT NOT NULL,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.min' = '1',\\n\"\n                + \"  'fields.name.max' = '10',\\n\"\n                + \"  'fields.key.min' = '1',\\n\"\n                + \"  'fields.key.max' = '2',\\n\"\n                + \"  'fields.search_cnt.min' = '1000',\\n\"\n                + \"  'fields.search_cnt.max' = '10000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    key BIGINT,\\n\"\n                + \"    name BIGINT,\\n\"\n                + \"    search_cnt BIGINT,\\n\"\n                + \"    window_start TIMESTAMP(3),\\n\"\n                + \"    window_end TIMESTAMP(3)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT key, name, search_cnt, window_start, window_end\\n\"\n                + \"FROM (\\n\"\n                + \"   SELECT key, name, search_cnt, window_start, window_end, \\n\"\n                + \"     ROW_NUMBER() OVER (PARTITION BY window_start, window_end, key\\n\"\n                + \"       ORDER BY search_cnt desc) AS rownum\\n\"\n                + \"   FROM (\\n\"\n                + \"      SELECT window_start, window_end, key, name, max(search_cnt) as search_cnt\\n\"\n                + \"      FROM TABLE(TUMBLE(TABLE source_table, DESCRIPTOR(row_time), INTERVAL '1' MINUTES))\\n\"\n                + \"      GROUP BY window_start, window_end, key, name\\n\"\n                + \"   )\\n\"\n                + \")\\n\"\n                + \"WHERE rownum <= 100\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_14_retract/Retract_Test.java",
    "content": "package flink.examples.sql._07.query._14_retract;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Retract_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       max(cast(server_timestamp as bigint)) as server_timestamp\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          server_timestamp,\\n\"\n                + \"          row_number() over(partition by user_id order by proctime desc) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\\n\"\n                + \"group by user_id\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_15_exec_options/Default_Parallelism_Test.java",
    "content": "package flink.examples.sql._07.query._15_exec_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Default_Parallelism_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setInteger(\"table.exec.resource.default-parallelism\", 8);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       max(cast(server_timestamp as bigint)) as server_timestamp\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          server_timestamp,\\n\"\n                + \"          row_number() over(partition by user_id order by proctime desc) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\\n\"\n                + \"group by user_id\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_15_exec_options/Idle_Timeout_Test.java",
    "content": "package flink.examples.sql._07.query._15_exec_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Idle_Timeout_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.exec.source.idle-timeout\", \"180 s\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       max(cast(server_timestamp as bigint)) as server_timestamp\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          server_timestamp,\\n\"\n                + \"          row_number() over(partition by user_id order by row_time desc) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\\n\"\n                + \"group by user_id\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_15_exec_options/State_Ttl_Test.java",
    "content": "package flink.examples.sql._07.query._15_exec_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class State_Ttl_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.exec.state.ttl\", \"180 s\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       max(cast(server_timestamp as bigint)) as server_timestamp\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          server_timestamp,\\n\"\n                + \"          row_number() over(partition by user_id order by row_time desc) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\\n\"\n                + \"group by user_id\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_16_optimizer_options/Agg_OnePhase_Strategy_window_Test.java",
    "content": "package flink.examples.sql._07.query._16_optimizer_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Agg_OnePhase_Strategy_window_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.optimizer.agg-phase-strategy\", \"ONE_PHASE\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp_LTZ(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    window_end bigint,\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT,\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '60' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_16_optimizer_options/Agg_TwoPhase_Strategy_unbounded_Test.java",
    "content": "package flink.examples.sql._07.query._16_optimizer_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Agg_TwoPhase_Strategy_unbounded_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.optimizer.agg-phase-strategy\", \"TWO_PHASE\");\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.exec.mini-batch.enabled\", \"true\");\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.exec.mini-batch.allow-latency\", \"60 s\");\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.exec.mini-batch.size\", \"1000000000\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    cnt BIGINT,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    user_id,\\n\"\n                + \"    count(1) as cnt,\\n\"\n                + \"    max(cast(server_timestamp as bigint)) as server_timestamp\\n\"\n                + \"FROM source_table\\n\"\n                + \"GROUP BY\\n\"\n                + \"    user_id\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_16_optimizer_options/Agg_TwoPhase_Strategy_window_Test.java",
    "content": "package flink.examples.sql._07.query._16_optimizer_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Agg_TwoPhase_Strategy_window_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.optimizer.agg-phase-strategy\", \"TWO_PHASE\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp_LTZ(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    window_end bigint,\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT,\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"SELECT UNIX_TIMESTAMP(CAST(window_end AS STRING)) * 1000 as window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '60' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_16_optimizer_options/DistinctAgg_Split_One_Distinct_Key_Test.java",
    "content": "package flink.examples.sql._07.query._16_optimizer_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class DistinctAgg_Split_One_Distinct_Key_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.optimizer.distinct-agg.split.enabled\", \"true\");\n\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.optimizer.distinct-agg.split.bucket-num\", \"1024\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    uv BIGINT,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    count(distinct user_id) as uv,\\n\"\n                + \"    max(cast(server_timestamp as bigint)) as server_timestamp\\n\"\n                + \"FROM source_table\\n\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_16_optimizer_options/DistinctAgg_Split_Two_Distinct_Key_Test.java",
    "content": "package flink.examples.sql._07.query._16_optimizer_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class DistinctAgg_Split_Two_Distinct_Key_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.optimizer.distinct-agg.split.enabled\", \"true\");\n\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.optimizer.distinct-agg.split.bucket-num\", \"1024\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id_uv BIGINT,\\n\"\n                + \"    name_uv BIGINT,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    count(distinct user_id) as user_id_uv,\\n\"\n                + \"    count(distinct name) as name_uv,\\n\"\n                + \"    max(cast(server_timestamp as bigint)) as server_timestamp\\n\"\n                + \"FROM source_table\\n\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_17_table_options/Dml_Syc_False_Test.java",
    "content": "package flink.examples.sql._07.query._17_table_options;\n\nimport org.apache.flink.table.api.StatementSet;\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Dml_Syc_False_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.dml-sync\", \"false\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    row_time AS TO_TIMESTAMP_LTZ(cast(UNIX_TIMESTAMP() as bigint) * 1000, 3),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table_1 (\\n\"\n                + \"    window_end timestamp(3),\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT,\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table_2 (\\n\"\n                + \"    id bigint,\\n\"\n                + \"    window_end timestamp(3),\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table_1\\n\"\n                + \"SELECT window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '5' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\\n\"\n                + \";\\n\"\n                + \"\\n\"\n                + \"insert into sink_table_2\\n\"\n                + \"SELECT id, \\n\"\n                + \"      window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '5' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end, \\n\"\n           
     + \"        id\\n\"\n                + \";\";\n\n        StatementSet statementSet = flinkEnv.streamTEnv().createStatementSet();\n\n        for (String innerSql : sql.split(\";\")) {\n\n            if (innerSql.contains(\"insert\")) {\n                statementSet.addInsertSql(innerSql);\n            } else {\n                TableResult tableResult = flinkEnv.streamTEnv()\n                        .executeSql(innerSql);\n            }\n        }\n\n        statementSet.execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_17_table_options/Dml_Syc_True_Test.java",
    "content": "package flink.examples.sql._07.query._17_table_options;\n\nimport org.apache.flink.table.api.StatementSet;\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Dml_Syc_True_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.dml-sync\", \"true\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    row_time AS TO_TIMESTAMP_LTZ(cast(UNIX_TIMESTAMP() as bigint) * 1000, 3),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table_1 (\\n\"\n                + \"    window_end timestamp(3),\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT,\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table_2 (\\n\"\n                + \"    id bigint,\\n\"\n                + \"    window_end timestamp(3),\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table_1\\n\"\n                + \"SELECT window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '5' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\\n\"\n                + \";\\n\"\n                + \"\\n\"\n                + \"insert into sink_table_2\\n\"\n                + \"SELECT id, \\n\"\n                + \"      window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '5' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end, \\n\"\n             
   + \"        id\\n\"\n                + \";\";\n\n        StatementSet statementSet = flinkEnv.streamTEnv().createStatementSet();\n\n        for (String innerSql : sql.split(\";\")) {\n\n            if (innerSql.contains(\"insert\")) {\n                statementSet.addInsertSql(innerSql);\n            } else {\n                TableResult tableResult = flinkEnv.streamTEnv()\n                        .executeSql(innerSql);\n            }\n        }\n\n        statementSet.execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_17_table_options/TimeZone_window_Test.java",
    "content": "package flink.examples.sql._07.query._17_table_options;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TimeZone_window_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        flinkEnv.streamTEnv()\n                .getConfig()\n                .getConfiguration()\n                .setString(\"table.local-time-zone\", \"GMT+00:00\");\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    row_time AS TO_TIMESTAMP_LTZ(cast(UNIX_TIMESTAMP() as bigint) * 1000, 3),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    window_end timestamp(3),\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT,\\n\"\n                + \"    count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"SELECT window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '5' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\";\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_07/query/_18_performance_tuning/Count_Distinct_Filter_Test.java",
    "content": "package flink.examples.sql._07.query._18_performance_tuning;\n\nimport org.apache.flink.table.api.StatementSet;\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Count_Distinct_Filter_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    id BIGINT,\\n\"\n                + \"    money BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS TO_TIMESTAMP_LTZ(cast(UNIX_TIMESTAMP() as bigint) * 1000, 3),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.id.min' = '1',\\n\"\n                + \"  'fields.id.max' = '100000',\\n\"\n                + \"  'fields.money.min' = '1',\\n\"\n                + \"  'fields.money.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table_1 (\\n\"\n                + \"    window_end timestamp(3),\\n\"\n                + \"    window_start timestamp(3),\\n\"\n                + \"    sum_money BIGINT,\\n\"\n                + \"    count_distinct_id BIGINT,\\n\"\n                + \"    a_count_distinct_id BIGINT,\\n\"\n                + \"    b_count_distinct_id BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table_1\\n\"\n                + \"SELECT window_end, \\n\"\n                + \"      window_start, \\n\"\n                + \"      sum(money) as sum_money,\\n\"\n                + \"      count(distinct id) as count_distinct_id,\\n\"\n                + \"      count(distinct case when name = 'a' then id else null end) as a_count_distinct_id,\\n\"\n                + \"      count(distinct case when name = 'b' then id else null end) as b_count_distinct_id\\n\"\n                + \"FROM TABLE(CUMULATE(\\n\"\n                + \"         TABLE source_table\\n\"\n                + \"         , DESCRIPTOR(row_time)\\n\"\n                + \"         , INTERVAL '5' SECOND\\n\"\n                + \"         , INTERVAL '1' DAY))\\n\"\n                + \"GROUP BY window_start, \\n\"\n                + \"        window_end\\n\"\n                + \";\";\n\n        StatementSet statementSet = flinkEnv.streamTEnv().createStatementSet();\n\n        for (String innerSql : sql.split(\";\")) {\n\n            if (innerSql.contains(\"insert\")) {\n                statementSet.addInsertSql(innerSql);\n            } else {\n                TableResult tableResult = flinkEnv.streamTEnv()\n                        .executeSql(innerSql);\n            }\n        }\n\n        statementSet.execute();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/Utils.java",
    "content": "package flink.examples.sql._08.batch;\n\nimport java.util.regex.Pattern;\n\npublic class Utils {\n\n    public static String format(String sql) {\n\n        // https://blog.csdn.net/qq_21383435/article/details/82286132\n\n        Pattern p = Pattern.compile(\"(?ms)('(?:''|[^'])*')|--.*?$|/\\\\*.*?\\\\*/|#.*?$|\");\n        return p.matcher(sql).replaceAll(\"$1\");\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_01_ddl/HiveDDLTest.java",
    "content": "package flink.examples.sql._08.batch._01_ddl;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\n\n\n/**\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n */\npublic class HiveDDLTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"myhive\", hive);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"myhive\");\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n//        String createTableSql = \"CREATE TABLE hive_table_1 (\\n\"\n//                + \"    user_id STRING,\\n\"\n//                + \"    order_amount DOUBLE\\n\"\n//                + \") PARTITIONED BY (\\n\"\n//                + \"    p_date STRING\\n\"\n//                + \") STORED AS parquet\";\n\n//        tEnv.executeSql(createTableSql);\n\n        // hive dialect 支持 insert overwrite table\n        // 默认不支持\n        tEnv.executeSql(\"insert overwrite table hive_table_1 select * from hive_table\")\n                .print();\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/HiveDMLBetweenAndTest.java",
    "content": "package flink.examples.sql._08.batch._02_dml;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\nimport org.apache.flink.table.module.hive.HiveModule;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveDMLBetweenAndTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"myhive\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"myhive\");\n\n        String version = \"3.1.2\";\n        tEnv.unloadModule(\"core\");\n\n        tEnv.loadModule(\"myhive\", new HiveModule(version));\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql = \"select count(1) as uv\\n\"\n                + \"     , sum(part_pv) as pv\\n\"\n                + \"     , max(part_max) as max_no\\n\"\n                + \"     , nvl(min(part_min), 1) as min_no\\n\"\n                + \"from (\\n\"\n                + \"    select user_id\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , 
min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\"\n                + \"    group by user_id\\n\"\n                + \") tmp\";\n\n        tEnv.executeSql(sql)\n                .print();\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/HiveDMLTest.java",
    "content": "package flink.examples.sql._08.batch._02_dml;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.hive.HiveModule;\n\n\n/**\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n */\npublic class HiveDMLTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"myhive\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"myhive\");\n\n        String version = \"3.1.2\";\n        tEnv.loadModule(\"myhive\", new HiveModule(version));\n\n        tEnv.executeSql(\"select count(1) as uv\\n\"\n                + \"     , sum(part_pv) as pv\\n\"\n                + \"     , max(part_max) as max_no\\n\"\n                + \"     , nvl(min(part_min), 1) as min_no\\n\"\n                + \"from (\\n\"\n                + \"    select user_id\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date = '20210920'\\n\"\n                + \"    group by 
user_id\\n\"\n                + \")\")\n                .print();\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/HiveTest2.java",
    "content": "package flink.examples.sql._08.batch._02_dml;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\n\n/**\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n */\npublic class HiveTest2 {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"myhive\", hive);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"myhive\");\n\n        tEnv.executeSql(\"select count(1) as uv\\n\"\n                + \"     , sum(part_pv) as pv\\n\"\n                + \"     , max(part_max) as max_no\\n\"\n                + \"     , min(part_min) as min_no\\n\"\n                + \"from (\\n\"\n                + \"    select user_id\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date = '20210920'\\n\"\n                + \"    group by user_id\\n\"\n                + \")\")\n                .print();\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_01_hive_dialect/HiveDMLTest.java",
    "content": "package flink.examples.sql._08.batch._02_dml._01_hive_dialect;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\n\n\n/**\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n */\npublic class HiveDMLTest {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(10);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"myhive\", hive);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"myhive\");\n\n        long l = System.currentTimeMillis();\n\n        tEnv.executeSql(\"insert into  values(\" + l + \", '20210923', '00')\")\n                .print();\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_02_with_as/HIveWIthAsTest.java",
    "content": "package flink.examples.sql._08.batch._02_dml._02_with_as;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HIveWIthAsTest {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 = \"\"\n                + \"with tmp as (\"\n                + \"\"\n                + \"select get_json_object(user_id, '$.user_id')\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \" 
   where p_date between '20210920' and '20210920'\\n\"\n                + \"    group by get_json_object(user_id, '$.user_id'))\"\n                + \"\\n\"\n                + \"select * from tmp\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_03_substr/HiveSubstrTest.java",
    "content": "package flink.examples.sql._08.batch._02_dml._03_substr;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveSubstrTest {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 = \"\"\n                + \"with tmp as (\"\n                + \"\"\n                + \"select substr(user_id, 1, 10)\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date 
between '20210920' and '20210920'\\n\"\n                + \"    group by substr(user_id, 1, 10))\"\n                + \"\\n\"\n                + \"select * from tmp\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_04_tumble_window/Test.java",
    "content": "package flink.examples.sql._08.batch._02_dml._04_tumble_window;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class Test {\n\n    // CREATE TABLE `hive_tumble_window_table`(\n    //  `user_id` string,\n    //  `order_amount` double,\n    //  `server_timestamp` timestamp\n    //\n    //  )\n    //PARTITIONED BY (\n    //  `p_date` string)\n    //\n    //\n    //insert into hive_tumble_window_table values ('yyc', 300, '2021-09-30 11:22:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:22:58.0', '20210920'), ('yyc', 300, '2021-09-30 11:23:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:24:57.0', '20210920'), ('yyc', 300, '2021-09-30 11:25:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:25:58.0', '20210920')\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        
tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 =\n                  \"insert overwrite hive_tumble_window_table_sink\\n\"\n                + \"select TUMBLE_START(server_timestamp, INTERVAL '1' MINUTE) as window_start\\n\"\n                + \"     , count(1) as part_pv\\n\"\n                + \"     , max(order_amount) as part_max\\n\"\n                + \"     , min(order_amount) as part_min\\n\"\n                + \"from hive_tumble_window_table\\n\"\n                + \"where p_date = '20210920'\\n\"\n                + \"group by TUMBLE(server_timestamp, INTERVAL '1' MINUTE)\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_04_tumble_window/Test1.java",
    "content": "package flink.examples.sql._08.batch._02_dml._04_tumble_window;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class Test1 {\n\n    // CREATE TABLE `hive_tumble_window_table`(\n    //  `user_id` string,\n    //  `order_amount` double,\n    //  `server_timestamp` timestamp\n    //\n    //  )\n    //PARTITIONED BY (\n    //  `p_date` string)\n    //\n    //\n    //insert into hive_tumble_window_table values ('yyc', 300, '2021-09-30 11:22:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:22:58.0', '20210920'), ('yyc', 300, '2021-09-30 11:23:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:24:57.0', '20210920'), ('yyc', 300, '2021-09-30 11:25:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:25:58.0', '20210920')\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        
tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 =\n                  \"\"\n                          + \"with tmp as (\\n\"\n                          + \"select cast(server_timestamp as timestamp(3)) as ti, order_amount as order_amount from hive_tumble_window_table\\n\"\n                          + \")\\n\"\n                          + \"\\n\"\n                + \"select  window_start, window_end, count(1) as part_pv\\n\"\n                + \"     , max(order_amount) as part_max\\n\"\n                + \"     , min(order_amount) as part_min\\n\"\n                                          + \"from TABLE(\\n\"\n                                                    + \"    TUMBLE(TABLE tmp, DESCRIPTOR(ti), INTERVAL '1' MINUTES))\\n\"\n//                                          + \"from tmp\\n\";\n//                + \"where p_date = '20210920'\\n\"\n                          + \"group by window_start, window_end\\n\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_04_tumble_window/Test2_BIGINT_SOURCE.java",
    "content": "package flink.examples.sql._08.batch._02_dml._04_tumble_window;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class Test2_BIGINT_SOURCE {\n\n    // CREATE TABLE `hive_tumble_window_table`(\n    //  `user_id` string,\n    //  `order_amount` double,\n    //  `server_timestamp` timestamp\n    //\n    //  )\n    //PARTITIONED BY (\n    //  `p_date` string)\n    //\n    //\n    //insert into hive_tumble_window_table values ('yyc', 300, '2021-09-30 11:22:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:22:58.0', '20210920'), ('yyc', 300, '2021-09-30 11:23:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:24:57.0', '20210920'), ('yyc', 300, '2021-09-30 11:25:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:25:58.0', '20210920')\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        
tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 =\n                  \"\\n\"\n//                          + \"insert overwrite hive_tumble_window_table_sink\\n\"\n                + \"select TUMBLE_START(st, INTERVAL '1' MINUTE) as window_start\\n\"\n                + \"     , count(1) as part_pv\\n\"\n                + \"     , max(order_amount) as part_max\\n\"\n                + \"     , min(order_amount) as part_min\\n\"\n                + \"from (select cast(TO_TIMESTAMP(server_timestamp_bigint, 3) as timestamp(3)) as st, order_amount as order_amount from hive_tumble_window_table_bigint_source where p_date = '20210920') tmp1\\n\"\n                + \"group by TUMBLE(st, INTERVAL '1' MINUTE)\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_04_tumble_window/Test3.java",
    "content": "package flink.examples.sql._08.batch._02_dml._04_tumble_window;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class Test3 {\n\n    // CREATE TABLE `hive_tumble_window_table`(\n    //  `user_id` string,\n    //  `order_amount` double,\n    //  `server_timestamp` timestamp\n    //\n    //  )\n    //PARTITIONED BY (\n    //  `p_date` string)\n    //\n    //\n    //insert into hive_tumble_window_table values ('yyc', 300, '2021-09-30 11:22:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:22:58.0', '20210920'), ('yyc', 300, '2021-09-30 11:23:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:24:57.0', '20210920'), ('yyc', 300, '2021-09-30 11:25:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:25:58.0', '20210920')\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        
tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 =\n                  \"insert overwrite hive_tumble_window_table_bigint_source partition(p_date = '20210921')\"\n//                          + \"with tmp as (\\n\"\n//                          + \"select cast(server_timestamp as timestamp(3)) as ti, order_amount as order_amount from hive_tumble_window_table\\n\"\n//                          + \")\\n\"\n//                          + \"\\n\"\n                + \"select  user_id, order_amount, server_timestamp_bigint, server_timestamp from hive_tumble_window_table_bigint_source\\n\";\n//                                          + \"from tmp\\n\";\n//                + \"where p_date = '20210920'\\n\"\n//                          + \"group by server_timestamp\\n\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_04_tumble_window/Test5.java",
    "content": "package flink.examples.sql._08.batch._02_dml._04_tumble_window;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class Test5 {\n\n    // CREATE TABLE `hive_tumble_window_table`(\n    //  `user_id` string,\n    //  `order_amount` double,\n    //  `server_timestamp` timestamp\n    //\n    //  )\n    //PARTITIONED BY (\n    //  `p_date` string)\n    //\n    //\n    //insert into hive_tumble_window_table values ('yyc', 300, '2021-09-30 11:22:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:22:58.0', '20210920'), ('yyc', 300, '2021-09-30 11:23:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:24:57.0', '20210920'), ('yyc', 300, '2021-09-30 11:25:57.0', '20210920'), ('yyc', 300,\n    // '2021-09-30 11:25:58.0', '20210920')\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        
tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 =\n                  \"select TUMBLE_START(server_timestamp, INTERVAL '1' MINUTE) as window_start\\n\"\n                + \"     , count(1) as part_pv\\n\"\n                + \"     , max(order_amount) as part_max\\n\"\n                + \"     , min(order_amount) as part_min\\n\"\n                + \"from hive_tumble_window_table\\n\"\n                + \"where p_date = '20210920'\\n\"\n                + \"group by TUMBLE(server_timestamp, INTERVAL '1' MINUTE)\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_05_batch_to_datastream/Test.java",
    "content": "package flink.examples.sql._08.batch._02_dml._05_batch_to_datastream;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class Test {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 = \"\"\n                + \"with tmp as (\"\n                + \"\"\n                + \"select count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between 
'20210920' and '20210920'\\n\"\n                + \")\\n\"\n                + \"select * from tmp\";\n\n        Table t = tEnv.sqlQuery(sql3);\n\n        tEnv.createTemporaryView(\"test\", t);\n\n        tEnv.executeSql(\"select * from test\")\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_02_dml/_06_select_where/Test.java",
    "content": "package flink.examples.sql._08.batch._02_dml._06_select_where;\n\nimport java.lang.reflect.Field;\nimport java.util.concurrent.TimeUnit;\nimport java.util.function.Supplier;\n\nimport org.apache.calcite.sql.SqlNode;\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.api.internal.TableEnvironmentImpl;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\nimport org.apache.flink.table.planner.delegation.ParserImpl;\nimport org.apache.flink.table.planner.parse.CalciteParser;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n * <p>\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class Test {\n\n    public static void main(String[] args) throws NoSuchFieldException, IllegalAccessException {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 = 
\"\"\n                + \"with tmp as (\"\n                + \"\"\n                + \"select count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where mod(cast(order_amount as bigint), 10) = 0 and cast(order_amount as bigint) <> 0\\n\"\n                + \")\\n\"\n                + \"select * from tmp\";\n\n        ParserImpl p = (ParserImpl) ((TableEnvironmentImpl) tEnv).getParser();\n\n        Field f = p.getClass().getDeclaredField(\"calciteParserSupplier\");\n\n        f.setAccessible(true);\n\n        Supplier<CalciteParser> su = (Supplier<CalciteParser>) f.get(p);\n\n        CalciteParser calciteParser = su.get();\n\n        SqlNode s = calciteParser.parse(sql3);\n\n        Table t = tEnv.sqlQuery(sql3);\n\n        tEnv.createTemporaryView(\"test\", t);\n\n        tEnv.executeSql(\"select * from test\")\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/HiveModuleV2.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf;\n\nimport static org.apache.flink.util.Preconditions.checkArgument;\n\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.Map;\nimport java.util.Optional;\nimport java.util.Set;\n\nimport org.apache.flink.annotation.VisibleForTesting;\nimport org.apache.flink.table.catalog.hive.client.HiveShim;\nimport org.apache.flink.table.catalog.hive.client.HiveShimLoader;\nimport org.apache.flink.table.catalog.hive.factories.HiveFunctionDefinitionFactory;\nimport org.apache.flink.table.functions.FunctionDefinition;\nimport org.apache.flink.table.module.Module;\nimport org.apache.flink.table.module.hive.udf.generic.GenericUDFLegacyGroupingID;\nimport org.apache.flink.table.module.hive.udf.generic.HiveGenericUDFGrouping;\nimport org.apache.flink.util.StringUtils;\nimport org.apache.hadoop.hive.ql.exec.FunctionInfo;\n\npublic class HiveModuleV2 implements Module {\n\n\n    // a set of functions that shouldn't be overridden by HiveModule\n    @VisibleForTesting\n    static final Set<String> BUILT_IN_FUNC_BLACKLIST =\n            Collections.unmodifiableSet(\n                    new HashSet<>(\n                            Arrays.asList(\n                                    \"count\",\n                                    \"cume_dist\",\n                                    \"current_date\",\n                                    \"current_timestamp\",\n                                    \"dense_rank\",\n                                    \"first_value\",\n                                    \"lag\",\n                                    \"last_value\",\n                                    \"lead\",\n                                    \"ntile\",\n                                    \"rank\",\n                                    \"row_number\",\n                                    \"hop\",\n                                    \"hop_end\",\n                                    \"hop_proctime\",\n                                    \"hop_rowtime\",\n                                    \"hop_start\",\n                                    \"percent_rank\",\n                                    \"session\",\n                                    \"session_end\",\n                                    \"session_proctime\",\n                                    \"session_rowtime\",\n                                    \"session_start\",\n                                    \"tumble\",\n                                    \"tumble_end\",\n                                    \"tumble_proctime\",\n                                    \"tumble_rowtime\",\n                                    \"tumble_start\")));\n\n    private final HiveFunctionDefinitionFactory factory;\n    private final String hiveVersion;\n    private final HiveShim hiveShim;\n    private Set<String> functionNames;\n\n    public HiveModuleV2() {\n        this(HiveShimLoader.getHiveVersion());\n    }\n\n    public HiveModuleV2(String hiveVersion) {\n        checkArgument(\n                !StringUtils.isNullOrWhitespaceOnly(hiveVersion), \"hiveVersion cannot be null\");\n\n        this.hiveVersion = hiveVersion;\n        this.hiveShim = HiveShimLoader.loadHiveShim(hiveVersion);\n        this.factory = new HiveFunctionDefinitionFactory(hiveShim);\n        this.functionNames = new HashSet<>();\n        this.map = new HashMap<>();\n    }\n\n    @Override\n    public Set<String> listFunctions() {\n        // lazy 
initialize\n        if (functionNames.isEmpty()) {\n            functionNames = hiveShim.listBuiltInFunctions();\n            functionNames.removeAll(BUILT_IN_FUNC_BLACKLIST);\n            functionNames.add(\"grouping\");\n            functionNames.add(GenericUDFLegacyGroupingID.NAME);\n            functionNames.addAll(map.keySet());\n        }\n        return functionNames;\n    }\n\n    @Override\n    public Optional<FunctionDefinition> getFunctionDefinition(String name) {\n        if (BUILT_IN_FUNC_BLACKLIST.contains(name)) {\n            return Optional.empty();\n        }\n        // We override Hive's grouping function. Refer to the implementation for more details.\n        if (name.equalsIgnoreCase(\"grouping\")) {\n            return Optional.of(\n                    factory.createFunctionDefinitionFromHiveFunction(\n                            name, HiveGenericUDFGrouping.class.getName()));\n        }\n\n        // this function is used to generate legacy GROUPING__ID value for old hive versions\n        if (name.equalsIgnoreCase(GenericUDFLegacyGroupingID.NAME)) {\n            return Optional.of(\n                    factory.createFunctionDefinitionFromHiveFunction(\n                            name, GenericUDFLegacyGroupingID.class.getName()));\n        }\n\n        Optional<FunctionInfo> info = hiveShim.getBuiltInFunctionInfo(name);\n\n        if (info.isPresent()) {\n            return info.map(\n                    functionInfo ->\n                            factory.createFunctionDefinitionFromHiveFunction(\n                                    name, functionInfo.getFunctionClass().getName()));\n        } else {\n            return Optional.ofNullable(this.map.get(name))\n                    .map(hiveUDFClassName -> factory.createFunctionDefinitionFromHiveFunction(name, hiveUDFClassName));\n        }\n    }\n\n    public String getHiveVersion() {\n        return hiveVersion;\n    }\n\n    private final Map<String, String> map;\n\n    public void registryHiveUDF(String hiveUDFName, String hiveUDFClassName) {\n        this.map.put(hiveUDFName, hiveUDFClassName);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/HiveUDFRegistryTest.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDFRegistryTest {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n\n        tEnv.unloadModule(\"core\");\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String[] s = tEnv.listFunctions();\n\n        String[] s1 = tEnv.listUserDefinedFunctions();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/HiveUDFRegistryUnloadTest.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDFRegistryUnloadTest {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.unloadModule(\"core\");\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String[] s = tEnv.listFunctions();\n\n        String[] s1 = tEnv.listUserDefinedFunctions();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_01_GenericUDAFResolver2/HiveUDAF_hive_module_registry_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._01_GenericUDAFResolver2;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDAF_hive_module_registry_Test {\n\n    public static void main(String[] args) throws IOException {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n        // TODO 可以成功执行没有任何问题\n        flinkEnv.hiveModuleV2().registryHiveUDF(\"test_hive_udaf\", TestHiveUDAF.class.getName());\n\n        String sql3 = \"select test_hive_udaf(user_id)\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\"\n                + \"    group by 0\";\n\n        flinkEnv.batchTEnv()\n                .executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_01_GenericUDAFResolver2/HiveUDAF_sql_registry_create_function_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._01_GenericUDAFResolver2;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDAF_sql_registry_create_function_Test {\n\n    public static void main(String[] args) throws ClassNotFoundException, IOException {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n        // TODO sql 执行创建 hive udaf 可以正常执行，create function 执行完成之后就会被注册到 hive catalog 中\n\n        String sql2 = \"CREATE FUNCTION test_hive_udaf as 'flink.examples.sql._08.batch._03_hive_udf._01_GenericUDAFResolver2.TestHiveUDAF'\";\n\n        String sql3 = \"select default.test_hive_udaf(user_id, '20210920')\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\"\n                + \"    group by 0\";\n\n        flinkEnv.batchTEnv().executeSql(sql2);\n        flinkEnv.batchTEnv().executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_01_GenericUDAFResolver2/HiveUDAF_sql_registry_create_temporary_function_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._01_GenericUDAFResolver2;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDAF_sql_registry_create_temporary_function_Test {\n\n    public static void main(String[] args) throws ClassNotFoundException, IOException {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n        // TODO sql 执行创建 hive udtf 会报错\n        //  java.lang.UnsupportedOperationException: This CatalogFunction is a InlineCatalogFunction. This method should not be called.\n        //  因为 CREATE TEMPORARY FUNCTION 使用的是 inline catalog\n\n        String sql2 = \"CREATE TEMPORARY FUNCTION test_hive_udaf as 'flink.examples.sql._08.batch._03_hive_udf._01_GenericUDAFResolver2.TestHiveUDAF'\";\n\n        String sql3 = \"select test_hive_udaf(user_id, '20210920')\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\"\n                + \"    group by 0\";\n\n        flinkEnv.batchTEnv().executeSql(sql2);\n        flinkEnv.batchTEnv().executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_01_GenericUDAFResolver2/TestHiveUDAF.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._01_GenericUDAFResolver2;\n\nimport org.apache.hadoop.hive.ql.metadata.HiveException;\nimport org.apache.hadoop.hive.ql.parse.SemanticException;\nimport org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;\nimport org.apache.hadoop.hive.ql.udf.generic.GenericUDAFParameterInfo;\nimport org.apache.hadoop.hive.ql.udf.generic.GenericUDAFResolver2;\nimport org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils;\nimport org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;\nimport org.apache.hadoop.io.Text;\n\npublic class TestHiveUDAF implements GenericUDAFResolver2 {\n\n    public GenericUDAFEvaluator getEvaluator(TypeInfo[] parameters) throws SemanticException {\n        return new InneGenericUDAFEvaluatorr();\n    }\n\n\n    public GenericUDAFEvaluator getEvaluator(GenericUDAFParameterInfo paramInfo) throws SemanticException {\n\n        return new InneGenericUDAFEvaluatorr();\n    }\n\n\n    public static class InneGenericUDAFEvaluatorr extends GenericUDAFEvaluator {\n        private PrimitiveObjectInspector inputOI;\n\n        @Override\n        public ObjectInspector init(Mode m, ObjectInspector[] parameters) throws HiveException {\n            super.init(m, parameters);\n            this.inputOI = (PrimitiveObjectInspector) parameters[0];\n            return PrimitiveObjectInspectorFactory.writableStringObjectInspector;\n        }\n\n        static class StringAgg implements AggregationBuffer {\n            String all = \"\";\n        }\n\n        @Override\n        public AggregationBuffer getNewAggregationBuffer() throws HiveException {\n            StringAgg stringAgg = new StringAgg();\n            return stringAgg;\n        }\n\n        @Override\n        public void reset(AggregationBuffer agg) throws HiveException {\n            StringAgg stringAgg = (StringAgg) agg;\n            stringAgg.all = \"\";\n        }\n\n        @Override\n        public void iterate(AggregationBuffer agg, Object[] parameters) throws HiveException {\n            StringAgg myagg = (StringAgg) agg;\n\n            String inputStr = PrimitiveObjectInspectorUtils.getString(parameters[0], inputOI);\n\n            myagg.all += inputStr;\n        }\n\n        @Override\n        public Object terminatePartial(AggregationBuffer agg) throws HiveException {\n            return this.terminate(agg);\n        }\n\n        @Override\n        public void merge(AggregationBuffer agg, Object partial) throws HiveException {\n            if (partial != null) {\n                StringAgg stringAgg = (StringAgg) agg;\n\n                stringAgg.all += partial;\n            }\n        }\n\n        @Override\n        public Object terminate(AggregationBuffer agg) throws HiveException {\n\n            return new Text(((StringAgg) agg).all);\n        }\n\n\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_02_GenericUDTF/HiveUDTF_hive_module_registry_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._02_GenericUDTF;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDTF_hive_module_registry_Test {\n\n    public static void main(String[] args) throws IOException {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n        // TODO 可以成功执行没有任何问题\n        flinkEnv.hiveModuleV2().registryHiveUDF(\"test_hive_udtf\", TestHiveUDTF.class.getName());\n\n        String sql3 = \"select test_hive_udtf(user_id) as (a)\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\";\n\n        flinkEnv.batchTEnv()\n                .executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_02_GenericUDTF/HiveUDTF_sql_registry_create_function_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._02_GenericUDTF;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDTF_sql_registry_create_function_Test {\n\n    public static void main(String[] args) throws ClassNotFoundException, IOException {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n//        String sql = \"drop function default.test_hive_udtf\";\n\n        // TODO sql 执行正常，create function 使用的是 hive catalog 没有任何问题\n        String sql2 = \"CREATE FUNCTION test_hive_udtf as 'flink.examples.sql._08.batch._03_hive_udf._02_GenericUDTF.TestHiveUDTF'\";\n\n        String sql3 = \"select default.test_hive_udtf(user_id)\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\";\n\n//        flinkEnv.batchTEnv().executeSql(sql);\n        flinkEnv.batchTEnv().executeSql(sql2);\n        flinkEnv.batchTEnv().executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_02_GenericUDTF/HiveUDTF_sql_registry_create_temporary_function_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._02_GenericUDTF;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDTF_sql_registry_create_temporary_function_Test {\n\n    public static void main(String[] args) throws ClassNotFoundException, IOException {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n        // TODO sql 执行创建 hive udtf 会报错\n        //  Caused by: java.lang.UnsupportedOperationException: This CatalogFunction is a InlineCatalogFunction. This method should not be called.\n        String sql2 = \"CREATE TEMPORARY FUNCTION test_hive_udtf as 'flink.examples.sql._08.batch._03_hive_udf._02_GenericUDTF.TestHiveUDTF'\";\n\n        String sql3 = \"select default.test_hive_udtf(user_id)\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\";\n\n        flinkEnv.batchTEnv().executeSql(sql2);\n        flinkEnv.batchTEnv().executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_02_GenericUDTF/TestHiveUDTF.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._02_GenericUDTF;\n\nimport java.util.ArrayList;\n\nimport org.apache.hadoop.hive.ql.exec.UDFArgumentException;\nimport org.apache.hadoop.hive.ql.metadata.HiveException;\nimport org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;\nimport org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;\nimport org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;\n\npublic class TestHiveUDTF extends GenericUDTF {\n\n    @Override\n    public StructObjectInspector initialize(ObjectInspector[] argOIs) throws UDFArgumentException {\n        ArrayList<String> fieldNames = new ArrayList<String>() {{\n            add(\"column1\");\n        }};\n        ArrayList<ObjectInspector> fieldOIs = new ArrayList<ObjectInspector>() {{\n            add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);\n        }};\n\n        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldOIs);\n    }\n\n    @Override\n    public void process(Object[] objects) throws HiveException {\n\n        forward(objects[0]);\n        forward(objects[0]);\n\n    }\n\n    @Override\n    public void close() throws HiveException {\n\n    }\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_03_built_in_udf/_01_get_json_object/HiveUDF_get_json_object_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._03_built_in_udf._01_get_json_object;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_get_json_object_Test {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 = \"select get_json_object(user_id, '$.user_id')\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and 
'20210920'\\n\"\n                + \"    group by get_json_object(user_id, '$.user_id')\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_03_built_in_udf/_02_rlike/HiveUDF_rlike_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._03_built_in_udf._02_rlike;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\n\nimport flink.examples.sql._08.batch._03_hive_udf.HiveModuleV2;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_rlike_Test {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.13.5 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n\n        // TODO hive module 才支持 rLike\n        String sql3 = \"with tmp as (select case when user_id rlike 'a' then 1 else 0 end as b -- 注释\\n\"\n                + \"         , count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n                + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date = '20210920'\\n\"\n                + \"    group by user_id) \\n\"\n                + \"\\n\"\n                + 
\"select * from tmp\";\n\n        tEnv.executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_04_GenericUDF/HiveUDF_hive_module_registry_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_hive_module_registry_Test {\n\n    public static void main(String[] args) throws IOException {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n        // TODO 可以正常执行\n        flinkEnv.hiveModuleV2().registryHiveUDF(\"test_hive_udf\", TestGenericUDF.class.getName());\n\n        String sql3 = \"select test_hive_udf(user_id)\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\";\n\n        flinkEnv.batchTEnv()\n                .executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_04_GenericUDF/HiveUDF_sql_registry_create_function_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_sql_registry_create_function_Test {\n\n    public static void main(String[] args) throws ClassNotFoundException, IOException {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n        // TODO sql 执行创建 hive udf 可以正常执行，create function 执行完成之后就会被注册到 hive catalog 中\n        String sql2 = \"CREATE FUNCTION test_hive_udf as 'flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF.TestGenericUDF'\";\n\n        String sql3 = \"select test_hive_udf(user_id)\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\";\n\n        flinkEnv.batchTEnv().executeSql(sql2);\n        flinkEnv.batchTEnv().executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_04_GenericUDF/HiveUDF_sql_registry_create_temporary_function_Test.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF;\n\nimport java.io.IOException;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_sql_registry_create_temporary_function_Test {\n\n    public static void main(String[] args) throws ClassNotFoundException, IOException {\n        FlinkEnv flinkEnv = FlinkEnvUtils.getBatchTableEnv(args);\n\n        // TODO sql 执行创建 hive udf 可以正常执行\n        String sql2 = \"CREATE TEMPORARY FUNCTION test_hive_udf as 'flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF.TestGenericUDF'\";\n\n        String sql3 = \"select test_hive_udf(user_id)\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\";\n\n        flinkEnv.batchTEnv().executeSql(sql2);\n        flinkEnv.batchTEnv().executeSql(sql3)\n                .print();\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_03_hive_udf/_04_GenericUDF/TestGenericUDF.java",
    "content": "package flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF;\n\nimport org.apache.hadoop.hive.ql.exec.UDFArgumentException;\nimport org.apache.hadoop.hive.ql.metadata.HiveException;\nimport org.apache.hadoop.hive.ql.udf.generic.GenericUDF;\nimport org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;\nimport org.apache.hadoop.io.Text;\n\npublic class TestGenericUDF extends GenericUDF {\n\n    private transient StringObjectInspector soi = null;\n\n    private transient StringObjectInspector soi1 = null;\n\n    @Override\n    public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {\n        PrimitiveObjectInspector primitiveObjectInspector = (PrimitiveObjectInspector) arguments[0];\n        soi = (StringObjectInspector) primitiveObjectInspector;\n        return PrimitiveObjectInspectorFactory\n                .getPrimitiveWritableObjectInspector(PrimitiveObjectInspector.PrimitiveCategory.STRING);\n    }\n\n    @Override\n    public Object evaluate(DeferredObject[] arguments) throws HiveException {\n        return new Text(\"UNKNOWN\");\n    }\n\n    @Override\n    public String getDisplayString(String[] children) {\n        return \"test\";\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_04_flink_udf/FlinkUDAF_Test.java",
    "content": "package flink.examples.sql._08.batch._04_flink_udf;\n\npublic class FlinkUDAF_Test {\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_04_flink_udf/FlinkUDF_Test.java",
    "content": "package flink.examples.sql._08.batch._04_flink_udf;\n\npublic class FlinkUDF_Test {\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_04_flink_udf/FlinkUDTF_Test.java",
    "content": "package flink.examples.sql._08.batch._04_flink_udf;\n\npublic class FlinkUDTF_Test {\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_08/batch/_05_test/_01_batch_to_datastream/Test.java",
    "content": "package flink.examples.sql._08.batch._05_test._01_batch_to_datastream;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\n\npublic class Test {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inBatchMode()\n                .build();\n\n        TableEnvironment tEnv = TableEnvironment.create(settings);\n\n        // TODO 这一行会抛出异常\n        StreamTableEnvironment t1Env = StreamTableEnvironment.create(env, settings);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_01_hive_udf/_01_GenericUDF/HiveUDF_sql_registry_create_function_Test.java",
    "content": "package flink.examples.sql._09.udf._01_hive_udf._01_GenericUDF;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_sql_registry_create_function_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        // TODO stream sql hive udf 创建不报错，执行使用报错 class cast exception\n        String sql2 = \"CREATE FUNCTION test_hive_udf as 'flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF.TestGenericUDF'\";\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.order_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    count_result BIGINT,\\n\"\n                + \"    sum_result BIGINT,\\n\"\n                + \"    avg_result DOUBLE,\\n\"\n                + \"    min_result BIGINT,\\n\"\n                + \"    max_result BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select test_hive_udf(order_id) as order_id,\\n\"\n                + \"       count(*) as count_result,\\n\"\n                + \"       sum(price) as sum_result,\\n\"\n                + \"       avg(price) as avg_result,\\n\"\n                + \"       min(price) as min_result,\\n\"\n                + \"       max(price) as max_result\\n\"\n                + \"from source_table\\n\"\n                + \"group by order_id\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUP AGG 案例\");\n\n        flinkEnv.streamTEnv().executeSql(sql2);\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_01_hive_udf/_01_GenericUDF/HiveUDF_sql_registry_create_function_with_hive_catalog_Test.java",
    "content": "package flink.examples.sql._09.udf._01_hive_udf._01_GenericUDF;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_sql_registry_create_function_with_hive_catalog_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[]{\"--enable.hive.catalog\", \"true\"});\n\n        // TODO stream sql hive udf 成功，底层可能调用了 hive 相关的逻辑，所以能成功\n//        String sql2 = \"CREATE FUNCTION test_hive_udf as 'flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF.TestGenericUDF'\";\n\n        String sql =\n//                \"CREATE TABLE source_table (\\n\"\n//                + \"    order_id STRING,\\n\"\n//                + \"    price BIGINT\\n\"\n//                + \") WITH (\\n\"\n//                + \"  'connector' = 'datagen',\\n\"\n//                + \"  'rows-per-second' = '10',\\n\"\n//                + \"  'fields.order_id.length' = '1',\\n\"\n//                + \"  'fields.price.min' = '1',\\n\"\n//                + \"  'fields.price.max' = '1000000'\\n\"\n//                + \");\\n\"\n//                + \"\\n\"\n//                + \"\\n\"\n//                + \"CREATE TABLE sink_table (\\n\"\n//                + \"    order_id STRING,\\n\"\n//                + \"    count_result BIGINT,\\n\"\n//                + \"    sum_result BIGINT,\\n\"\n//                + \"    avg_result DOUBLE,\\n\"\n//                + \"    min_result BIGINT,\\n\"\n//                + \"    max_result BIGINT\\n\"\n//                + \") WITH (\\n\"\n//                + \"  'connector' = 'print'\\n\"\n//                + \");\\n\"\n//                + \"\\n\"\n//                +\n                \"insert into sink_table\\n\"\n                + \"select test_hive_udf(order_id) as order_id,\\n\"\n                + \"       count(*) as count_result,\\n\"\n                + \"       sum(price) as sum_result,\\n\"\n                + \"       avg(price) as avg_result,\\n\"\n                + \"       min(price) as min_result,\\n\"\n                + \"       max(price) as max_result\\n\"\n                + \"from source_table\\n\"\n                + \"group by order_id\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUP AGG 案例\");\n\n//        flinkEnv.streamTEnv().executeSql(sql2);\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_01_hive_udf/_01_GenericUDF/HiveUDF_sql_registry_create_temporary_function_Test.java",
    "content": "package flink.examples.sql._09.udf._01_hive_udf._01_GenericUDF;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_sql_registry_create_temporary_function_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        // TODO stream sql 执行 hive udf 创建不报错，执行使用报错\n        //  Caused by: java.lang.ClassCastException: flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF.TestGenericUDF cannot be cast to org.apache.flink.table.functions.UserDefinedFunction\n        String sql2 = \"CREATE TEMPORARY FUNCTION test_hive_udf as 'flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF.TestGenericUDF'\";\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    price BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.order_id.length' = '1',\\n\"\n                + \"  'fields.price.min' = '1',\\n\"\n                + \"  'fields.price.max' = '1000000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    order_id STRING,\\n\"\n                + \"    count_result BIGINT,\\n\"\n                + \"    sum_result BIGINT,\\n\"\n                + \"    avg_result DOUBLE,\\n\"\n                + \"    min_result BIGINT,\\n\"\n                + \"    max_result BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select test_hive_udf(order_id) as order_id,\\n\"\n                + \"       count(*) as count_result,\\n\"\n                + \"       sum(price) as sum_result,\\n\"\n                + \"       avg(price) as avg_result,\\n\"\n                + \"       min(price) as min_result,\\n\"\n                + \"       max(price) as max_result\\n\"\n                + \"from source_table\\n\"\n                + \"group by order_id\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUP AGG 案例\");\n\n        flinkEnv.streamTEnv().executeSql(sql2);\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_01_hive_udf/_01_GenericUDF/HiveUDF_sql_registry_create_temporary_function_with_hive_catalog_Test.java",
    "content": "package flink.examples.sql._09.udf._01_hive_udf._01_GenericUDF;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_sql_registry_create_temporary_function_with_hive_catalog_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[]{\"--enable.hive.catalog\", \"true\", \"--enable.hive.dialect\", \"true\"});\n\n        // TODO stream sql 执行 hive udf 创建不报错，执行使用报错\n        //  Caused by: java.lang.ClassCastException: flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF.TestGenericUDF cannot be cast to org.apache.flink.table.functions.UserDefinedFunction\n        String sql2 = \"CREATE TEMPORARY FUNCTION test_hive_udf as 'flink.examples.sql._08.batch._03_hive_udf._04_GenericUDF.TestGenericUDF'\";\n\n        String sql =\n//                \"CREATE TABLE source_table (\\n\"\n//                + \"    order_id STRING,\\n\"\n//                + \"    price BIGINT\\n\"\n//                + \") WITH (\\n\"\n//                + \"  'connector' = 'datagen',\\n\"\n//                + \"  'rows-per-second' = '10',\\n\"\n//                + \"  'fields.order_id.length' = '1',\\n\"\n//                + \"  'fields.price.min' = '1',\\n\"\n//                + \"  'fields.price.max' = '1000000'\\n\"\n//                + \");\\n\"\n//                + \"\\n\"\n//                + \"\\n\"\n//                + \"CREATE TABLE sink_table (\\n\"\n//                + \"    order_id STRING,\\n\"\n//                + \"    count_result BIGINT,\\n\"\n//                + \"    sum_result BIGINT,\\n\"\n//                + \"    avg_result DOUBLE,\\n\"\n//                + \"    min_result BIGINT,\\n\"\n//                + \"    max_result BIGINT\\n\"\n//                + \") WITH (\\n\"\n//                + \"  'connector' = 'print'\\n\"\n//                + \");\\n\"\n//                + \"\\n\"\n//                +\n                        \"insert into sink_table\\n\"\n                + \"select test_hive_udf(order_id) as order_id,\\n\"\n                + \"       count(*) as count_result,\\n\"\n                + \"       sum(price) as sum_result,\\n\"\n                + \"       avg(price) as avg_result,\\n\"\n                + \"       min(price) as min_result,\\n\"\n                + \"       max(price) as max_result\\n\"\n                + \"from source_table\\n\"\n                + \"group by order_id\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"GROUP AGG 案例\");\n\n        flinkEnv.streamTEnv().executeSql(sql2);\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_01_hive_udf/_01_GenericUDF/TestGenericUDF.java",
    "content": "package flink.examples.sql._09.udf._01_hive_udf._01_GenericUDF;\n\nimport org.apache.hadoop.hive.ql.exec.UDFArgumentException;\nimport org.apache.hadoop.hive.ql.metadata.HiveException;\nimport org.apache.hadoop.hive.ql.udf.generic.GenericUDF;\nimport org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;\nimport org.apache.hadoop.io.Text;\n\npublic class TestGenericUDF extends GenericUDF {\n\n    private transient StringObjectInspector soi = null;\n\n    private transient StringObjectInspector soi1 = null;\n\n    @Override\n    public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {\n        PrimitiveObjectInspector primitiveObjectInspector = (PrimitiveObjectInspector) arguments[0];\n        soi = (StringObjectInspector) primitiveObjectInspector;\n        return PrimitiveObjectInspectorFactory\n                .getPrimitiveWritableObjectInspector(PrimitiveObjectInspector.PrimitiveCategory.STRING);\n    }\n\n    @Override\n    public Object evaluate(DeferredObject[] arguments) throws HiveException {\n        return new Text(\"UNKNOWN\");\n    }\n\n    @Override\n    public String getDisplayString(String[] children) {\n        return \"test\";\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_02_stream_hive_udf/HiveUDF_Error_Test.java",
    "content": "package flink.examples.sql._09.udf._02_stream_hive_udf;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_Error_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.v2\", \"false\"});\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `params` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._09.udf._02_stream_hive_udf.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `log_id` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       get_json_object(params, '$.log_id') as log_id\\n\"\n                + \"from source_table\\n\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"Hive UDF 测试案例\");\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_02_stream_hive_udf/HiveUDF_create_temporary_error_Test.java",
    "content": "package flink.examples.sql._09.udf._02_stream_hive_udf;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_create_temporary_error_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TEMPORARY FUNCTION test_hive_udf as 'flink.examples.sql._09.udf._02_stream_hive_udf.TestGenericUDF';\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `params` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._09.udf._02_stream_hive_udf.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `log_id` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       test_hive_udf(params) as log_id\\n\"\n                + \"from source_table\\n\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"Hive UDF 测试案例\");\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_02_stream_hive_udf/HiveUDF_hive_module_registry_Test.java",
    "content": "package flink.examples.sql._09.udf._02_stream_hive_udf;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\npublic class HiveUDF_hive_module_registry_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `params` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._09.udf._02_stream_hive_udf.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `log_id` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       test_hive_udf(params) as log_id\\n\"\n                + \"from source_table\\n\";\n\n        flinkEnv.hiveModuleV2()\n                .registryHiveUDF(\n                        \"test_hive_udf\"\n                        , TestGenericUDF.class.getName());\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"Hive UDF 测试案例\");\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_02_stream_hive_udf/HiveUDF_load_first_Test.java",
    "content": "package flink.examples.sql._09.udf._02_stream_hive_udf;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_load_first_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `params` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._09.udf._02_stream_hive_udf.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `log_id` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       get_json_object(params, '$.log_id') as log_id\\n\"\n                + \"from source_table\\n\";\n\n        Arrays.stream(flinkEnv.streamTEnv().listModules()).forEach(System.out::println);\n\n        Arrays.stream(flinkEnv.streamTEnv().listFunctions()).forEach(System.out::println);\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"Hive UDF 测试案例\");\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_02_stream_hive_udf/HiveUDF_load_second_Test.java",
    "content": "package flink.examples.sql._09.udf._02_stream_hive_udf;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n *\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class HiveUDF_load_second_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(new String[] {\"--enable.hive.module.load-first\", \"false\"});\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `params` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._09.udf._02_stream_hive_udf.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `log_id` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       get_json_object(params, '$.log_id') as log_id\\n\"\n                + \"from source_table\\n\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"Hive UDF 测试案例\");\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_02_stream_hive_udf/TestGenericUDF.java",
    "content": "package flink.examples.sql._09.udf._02_stream_hive_udf;\n\nimport org.apache.hadoop.hive.ql.exec.UDFArgumentException;\nimport org.apache.hadoop.hive.ql.metadata.HiveException;\nimport org.apache.hadoop.hive.ql.udf.generic.GenericUDF;\nimport org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;\nimport org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;\nimport org.apache.hadoop.io.Text;\n\npublic class TestGenericUDF extends GenericUDF {\n\n    private transient StringObjectInspector soi = null;\n\n    private transient StringObjectInspector soi1 = null;\n\n    @Override\n    public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {\n        PrimitiveObjectInspector primitiveObjectInspector = (PrimitiveObjectInspector) arguments[0];\n        soi = (StringObjectInspector) primitiveObjectInspector;\n        return PrimitiveObjectInspectorFactory\n                .getPrimitiveWritableObjectInspector(PrimitiveObjectInspector.PrimitiveCategory.STRING);\n    }\n\n    @Override\n    public Object evaluate(DeferredObject[] arguments) throws HiveException {\n        return new Text(\"UNKNOWN\");\n    }\n\n    @Override\n    public String getDisplayString(String[] children) {\n        return \"test\";\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_02_stream_hive_udf/UserDefinedSource.java",
    "content": "package flink.examples.sql._09.udf._02_stream_hive_udf;\n\nimport org.apache.flink.api.common.serialization.DeserializationSchema;\nimport org.apache.flink.streaming.api.functions.source.RichSourceFunction;\nimport org.apache.flink.table.data.RowData;\n\nimport com.google.common.collect.ImmutableMap;\n\nimport flink.examples.JacksonUtils;\n\npublic class UserDefinedSource extends RichSourceFunction<RowData> {\n\n    private DeserializationSchema<RowData> dser;\n\n    private volatile boolean isCancel;\n\n    public UserDefinedSource(DeserializationSchema<RowData> dser) {\n        this.dser = dser;\n    }\n\n    @Override\n    public void run(SourceContext<RowData> ctx) throws Exception {\n\n        int i = 0;\n\n        while (!this.isCancel) {\n            ctx.collect(this.dser.deserialize(\n                    JacksonUtils.bean2Json(ImmutableMap.of(\"user_id\", 1111L, \"params\", \"{\\\"log_id\\\":\\\"\" + i + \"\\\"}\")).getBytes()\n            ));\n            Thread.sleep(1000);\n\n            i++;\n        }\n    }\n\n    @Override\n    public void cancel() {\n        this.isCancel = true;\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_03_advanced_type_inference/AdvancedFunctionsExample.java",
    "content": "package flink.examples.sql._09.udf._03_advanced_type_inference;\n\nimport java.time.LocalDate;\n\nimport org.apache.flink.table.api.DataTypes;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.TableEnvironment;\nimport org.apache.flink.types.Row;\n\npublic class AdvancedFunctionsExample {\n\n    public static void main(String[] args) throws Exception {\n        // setup the environment\n        final EnvironmentSettings settings =\n                EnvironmentSettings.newInstance().inBatchMode().build();\n        final TableEnvironment env = TableEnvironment.create(settings);\n\n        // execute different kinds of functions\n        executeLastDatedValueFunction(env);\n        executeInternalRowMergerFunction(env);\n    }\n\n    /**\n     * Aggregates data by name and returns the latest non-null {@code item_count} value with its\n     * corresponding {@code order_date}.\n     */\n    private static void executeLastDatedValueFunction(TableEnvironment env) {\n        // create a table with example data\n        final Table customers =\n                env.fromValues(\n                        DataTypes.of(\"ROW<name STRING, order_date DATE, item_count INT>\"),\n                        Row.of(\"Guillermo Smith\", LocalDate.parse(\"2020-12-01\"), 3),\n                        Row.of(\"Guillermo Smith\", LocalDate.parse(\"2020-12-05\"), 5),\n                        Row.of(\"Valeria Mendoza\", LocalDate.parse(\"2020-03-23\"), 4),\n                        Row.of(\"Valeria Mendoza\", LocalDate.parse(\"2020-06-02\"), 10),\n                        Row.of(\"Leann Holloway\", LocalDate.parse(\"2020-05-26\"), 9),\n                        Row.of(\"Leann Holloway\", LocalDate.parse(\"2020-05-27\"), null),\n                        Row.of(\"Brandy Sanders\", LocalDate.parse(\"2020-10-14\"), 1),\n                        Row.of(\"John Turner\", LocalDate.parse(\"2020-10-02\"), 12),\n                        Row.of(\"Ellen Ortega\", LocalDate.parse(\"2020-06-18\"), 100));\n        env.createTemporaryView(\"customers\", customers);\n\n        // register and execute the function\n        env.createTemporarySystemFunction(\"LastDatedValueFunction\", LastDatedValueFunction.class);\n        env.executeSql(\n                \"SELECT name, LastDatedValueFunction(item_count, order_date) \"\n                        + \"FROM customers GROUP BY name\")\n                .print();\n\n        // clean up\n        env.dropTemporaryView(\"customers\");\n    }\n\n    /** Merges two rows as efficient as possible using internal data structures. 
*/\n    private static void executeInternalRowMergerFunction(TableEnvironment env) {\n        // create a table with example data\n        final Table customers =\n                env.fromValues(\n                        DataTypes.of(\n                                \"ROW<name STRING, data1 ROW<birth_date DATE>, data2 ROW<city STRING, phone STRING>>\"),\n                        Row.of(\n                                \"Guillermo Smith\",\n                                Row.of(LocalDate.parse(\"1992-12-12\")),\n                                Row.of(\"New Jersey\", \"816-443-8010\")),\n                        Row.of(\n                                \"Valeria Mendoza\",\n                                Row.of(LocalDate.parse(\"1970-03-28\")),\n                                Row.of(\"Los Angeles\", \"928-264-9662\")),\n                        Row.of(\n                                \"Leann Holloway\",\n                                Row.of(LocalDate.parse(\"1989-05-21\")),\n                                Row.of(\"Eugene\", \"614-889-6038\")));\n        env.createTemporaryView(\"customers\", customers);\n\n        // register and execute the function\n        env.createTemporarySystemFunction(\n                \"InternalRowMergerFunction\", InternalRowMergerFunction.class);\n        env.executeSql(\"SELECT name, InternalRowMergerFunction(data1, data2) FROM customers\")\n                .print();\n\n        // clean up\n        env.dropTemporaryView(\"customers\");\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_03_advanced_type_inference/InternalRowMergerFunction.java",
    "content": "package flink.examples.sql._09.udf._03_advanced_type_inference;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Optional;\nimport java.util.stream.IntStream;\n\nimport org.apache.flink.table.api.DataTypes;\nimport org.apache.flink.table.catalog.DataTypeFactory;\nimport org.apache.flink.table.data.RowData;\nimport org.apache.flink.table.data.utils.JoinedRowData;\nimport org.apache.flink.table.functions.FunctionDefinition;\nimport org.apache.flink.table.functions.ScalarFunction;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.types.inference.ArgumentCount;\nimport org.apache.flink.table.types.inference.CallContext;\nimport org.apache.flink.table.types.inference.ConstantArgumentCount;\nimport org.apache.flink.table.types.inference.InputTypeStrategy;\nimport org.apache.flink.table.types.inference.Signature;\nimport org.apache.flink.table.types.inference.Signature.Argument;\nimport org.apache.flink.table.types.inference.TypeInference;\nimport org.apache.flink.table.types.logical.LogicalTypeRoot;\n\npublic class InternalRowMergerFunction extends ScalarFunction {\n\n    // --------------------------------------------------------------------------------------------\n    // Planning\n    // --------------------------------------------------------------------------------------------\n\n    @Override\n    public TypeInference getTypeInference(DataTypeFactory typeFactory) {\n        return TypeInference.newBuilder()\n                // accept a signature (ROW, ROW) with arbitrary field types but\n                // with internal conversion classes\n                .inputTypeStrategy(\n                        new InputTypeStrategy() {\n                            @Override\n                            public ArgumentCount getArgumentCount() {\n                                // the argument count is checked before input types are inferred\n                                return ConstantArgumentCount.of(2);\n                            }\n\n                            @Override\n                            public Optional<List<DataType>> inferInputTypes(\n                                    CallContext callContext, boolean throwOnFailure) {\n                                final List<DataType> args = callContext.getArgumentDataTypes();\n                                final DataType arg0 = args.get(0);\n                                final DataType arg1 = args.get(1);\n                                // perform some basic validation based on the logical type\n                                if (arg0.getLogicalType().getTypeRoot() != LogicalTypeRoot.ROW\n                                        || arg1.getLogicalType().getTypeRoot()\n                                        != LogicalTypeRoot.ROW) {\n                                    if (throwOnFailure) {\n                                        throw callContext.newValidationError(\n                                                \"Two row arguments expected.\");\n                                    }\n                                    return Optional.empty();\n                                }\n                                // keep the original logical type but express that both arguments\n                                // should use internal data structures\n                                return Optional.of(\n                                        Arrays.asList(\n                                                
arg0.bridgedTo(RowData.class),\n                                                arg1.bridgedTo(RowData.class)));\n                            }\n\n                            @Override\n                            public List<Signature> getExpectedSignatures(\n                                    FunctionDefinition definition) {\n                                // this helps in printing nice error messages\n                                return Collections.singletonList(\n                                        Signature.of(Argument.of(\"ROW\"), Argument.of(\"ROW\")));\n                            }\n                        })\n                .outputTypeStrategy(\n                        callContext -> {\n                            // merge fields and give them a unique name\n                            final List<DataType> args = callContext.getArgumentDataTypes();\n                            final List<DataType> allFieldDataTypes = new ArrayList<>();\n                            allFieldDataTypes.addAll(args.get(0).getChildren());\n                            allFieldDataTypes.addAll(args.get(1).getChildren());\n                            final DataTypes.Field[] fields =\n                                    IntStream.range(0, allFieldDataTypes.size())\n                                            .mapToObj(\n                                                    i ->\n                                                            DataTypes.FIELD(\n                                                                    \"f\" + i,\n                                                                    allFieldDataTypes.get(i)))\n                                            .toArray(DataTypes.Field[]::new);\n                            // create a new row with the merged fields and express that the return\n                            // type will use an internal data structure\n                            return Optional.of(DataTypes.ROW(fields).bridgedTo(RowData.class));\n                        })\n                .build();\n    }\n\n    // --------------------------------------------------------------------------------------------\n    // Runtime\n    // --------------------------------------------------------------------------------------------\n\n    public RowData eval(RowData r1, RowData r2) {\n        return new JoinedRowData(r1, r2);\n    }\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_03_advanced_type_inference/LastDatedValueFunction.java",
    "content": "package flink.examples.sql._09.udf._03_advanced_type_inference;\n\nimport java.time.LocalDate;\nimport java.util.Optional;\n\nimport org.apache.flink.table.api.DataTypes;\nimport org.apache.flink.table.catalog.DataTypeFactory;\nimport org.apache.flink.table.functions.AggregateFunction;\nimport org.apache.flink.table.types.DataType;\nimport org.apache.flink.table.types.inference.InputTypeStrategies;\nimport org.apache.flink.table.types.inference.TypeInference;\nimport org.apache.flink.types.Row;\n\nimport flink.examples.sql._09.udf._03_advanced_type_inference.LastDatedValueFunction.Accumulator;\n\npublic class LastDatedValueFunction<T>\n        extends AggregateFunction<Row, Accumulator<T>> {\n\n    // --------------------------------------------------------------------------------------------\n    // Planning\n    // --------------------------------------------------------------------------------------------\n\n    /**\n     * Declares the {@link TypeInference} of this function. It specifies:\n     *\n     * <ul>\n     *   <li>which argument types are supported when calling this function,\n     *   <li>which {@link DataType#getConversionClass()} should be used when calling the JVM method\n     *       {@link #accumulate(Accumulator, Object, LocalDate)} during runtime,\n     *   <li>a similar strategy how to derive an accumulator type,\n     *   <li>and a similar strategy how to derive the output type.\n     * </ul>\n     */\n    @Override\n    public TypeInference getTypeInference(DataTypeFactory typeFactory) {\n        return TypeInference.newBuilder()\n                // accept a signature (ANY, DATE) both with default conversion classes,\n                // the input type strategy is mostly used to produce nicer validation exceptions\n                // during planning, implementers can decide to skip it if they are fine with failing\n                // at a later stage during code generation when the runtime method is checked\n                .inputTypeStrategy(\n                        InputTypeStrategies.sequence(\n                                InputTypeStrategies.ANY,\n                                InputTypeStrategies.explicit(DataTypes.DATE())))\n                // let the accumulator data type depend on the first input argument\n                .accumulatorTypeStrategy(\n                        callContext -> {\n                            final DataType argDataType = callContext.getArgumentDataTypes().get(0);\n                            final DataType accDataType =\n                                    DataTypes.STRUCTURED(\n                                            Accumulator.class,\n                                            DataTypes.FIELD(\"value\", argDataType),\n                                            DataTypes.FIELD(\"date\", DataTypes.DATE()));\n                            return Optional.of(accDataType);\n                        })\n                // let the output data type depend on the first input argument\n                .outputTypeStrategy(\n                        callContext -> {\n                            final DataType argDataType = callContext.getArgumentDataTypes().get(0);\n                            final DataType outputDataType =\n                                    DataTypes.ROW(\n                                            DataTypes.FIELD(\"value\", argDataType),\n                                            DataTypes.FIELD(\"date\", DataTypes.DATE()));\n                            return Optional.of(outputDataType);\n       
                 })\n                .build();\n    }\n\n    // --------------------------------------------------------------------------------------------\n    // Runtime\n    // --------------------------------------------------------------------------------------------\n\n    /**\n     * Generic accumulator for representing state. It will contain different kind of instances for\n     * {@code value} depending on actual call in the query.\n     */\n    public static class Accumulator<T> {\n        public T value;\n        public LocalDate date;\n    }\n\n    @Override\n    public Accumulator<T> createAccumulator() {\n        return new Accumulator<>();\n    }\n\n    /**\n     * Generic runtime function that will be called with different kind of instances for {@code\n     * input} depending on actual call in the query.\n     */\n    public void accumulate(Accumulator<T> acc, T input, LocalDate date) {\n        if (input != null && (acc.date == null || date.isAfter(acc.date))) {\n            acc.value = input;\n            acc.date = date;\n        }\n    }\n\n    @Override\n    public Row getValue(Accumulator<T> acc) {\n        return Row.of(acc.value, acc.date);\n    }\n}"
  },
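  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_03_advanced_type_inference/LastDatedValueFunction_UsageSketch.java",
    "content": "package flink.examples.sql._09.udf._03_advanced_type_inference;\n\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.TableEnvironment;\n\n/**\n * A minimal usage sketch, assuming the Flink 1.13 Table API is on the classpath: it shows how a\n * TypeInference-based UDAF such as {@link LastDatedValueFunction} could be registered and\n * called from SQL. The class name, function name and VALUES data here are illustrative only.\n * Because the declared signature is (ANY, DATE), the same function works for any\n * first-argument type; here it is called with a STRING column, and the NULL row is ignored\n * by accumulate().\n */\npublic class LastDatedValueFunction_UsageSketch {\n\n    public static void main(String[] args) throws Exception {\n\n        TableEnvironment tEnv = TableEnvironment.create(\n                EnvironmentSettings.newInstance().inStreamingMode().build());\n\n        // register the generic aggregate function defined in LastDatedValueFunction\n        tEnv.createTemporarySystemFunction(\"last_dated_value\", LastDatedValueFunction.class);\n\n        // global aggregation over a bounded VALUES source; the result is ROW<value, date>\n        tEnv.executeSql(\n                \"SELECT last_dated_value(customer_name, order_date) AS latest\\n\"\n                        + \"FROM (VALUES\\n\"\n                        + \"        (CAST('Alice' AS STRING), DATE '2021-04-01'),\\n\"\n                        + \"        (CAST('Bob' AS STRING), DATE '2021-04-02'),\\n\"\n                        + \"        (CAST(NULL AS STRING), DATE '2021-04-03')\\n\"\n                        + \"     ) AS t(customer_name, order_date)\")\n                .print();\n    }\n\n}\n"
  },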
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_04_udf/UDAF_Test.java",
    "content": "package flink.examples.sql._09.udf._04_udf;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.TreeSet;\n\nimport org.apache.flink.api.common.accumulators.Accumulator;\nimport org.apache.flink.api.common.typeinfo.TypeHint;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.table.annotation.DataTypeHint;\nimport org.apache.flink.table.annotation.FunctionHint;\nimport org.apache.flink.table.functions.AggregateFunction;\nimport org.apache.flink.table.functions.ScalarFunction;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\nimport flink.examples.JacksonUtils;\nimport lombok.AllArgsConstructor;\nimport lombok.Data;\nimport lombok.NoArgsConstructor;\n\npublic class UDAF_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        String sql = \"CREATE TEMPORARY FUNCTION test_hive_udf as 'flink.examples.sql._09.udf._04_udf.UDAF_Test$CollectList2';\\n\"\n                + \"CREATE TEMPORARY FUNCTION to_json_udf as 'flink.examples.sql._09.udf._04_udf.UDAF_Test$ToJson';\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `params` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'user_defined',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'class.name' = 'flink.examples.sql._09.udf._02_stream_hive_udf.UserDefinedSource'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    `log_id` STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"insert into sink_table\\n\"\n                + \"select user_id,\\n\"\n//                + \"       to_json_udf(test_hive_udf(params, cast(0 as int), cast('a' as string), cast(0 as bigint))) as log_id\\n\"\n                + \"       to_json_udf(test_hive_udf(params)) as log_id\\n\"\n                + \"from source_table\\n\"\n                + \"group by user_id\\n\";\n\n        flinkEnv.streamTEnv().getConfig().getConfiguration().setString(\"pipeline.name\", \"UDAF 测试案例\");\n\n        for (String innerSql : sql.split(\";\")) {\n\n            flinkEnv.streamTEnv().executeSql(innerSql);\n        }\n\n    }\n\n    @Data\n    @AllArgsConstructor\n    @NoArgsConstructor\n    public static class Sentence implements Comparable<Sentence> {\n        private String msgid;\n        private Integer type;\n        private String content;\n        private Long ts;\n\n        public int compareTo(Sentence s) {\n            return s.equals(this) ? 
1 : 0;\n        }\n    }\n\n    public static class CollectList1 extends AggregateFunction<Sentence, Sentence> {\n\n        @Override\n        public Sentence getValue(Sentence strings) {\n            return new Sentence();\n        }\n\n        @Override\n        public Sentence createAccumulator() {\n            return new Sentence();\n        }\n\n        public void accumulate(Sentence list, String msgid, Integer type, String content, Long ts) {\n\n        }\n\n        public void merge(Sentence list, Iterable<Sentence> it) {\n        }\n\n//        @Override\n//        public TypeInformation<Sentence> getAccumulatorType() {\n//\n//            return Types.POJO(Sentence.class);\n//\n//        }\n//\n//        @Override\n//        public TypeInformation<Sentence> getResultType() {\n//\n//            return Types.POJO(Sentence.class);\n//        }\n    }\n\n\n    public static class CollectList extends AggregateFunction<List<Sentence>, List<Sentence>> {\n\n        @Override\n        public List<Sentence> getValue(List<Sentence> strings) {\n            return strings;\n        }\n\n        @Override\n        public List<Sentence> createAccumulator() {\n            return new ArrayList<>();\n        }\n\n        public void accumulate(List<Sentence> list, String msgid, Integer type, String content, Long ts) {\n            list.add(new Sentence(msgid, type, content, ts));\n        }\n\n        public void merge(List<Sentence> list, Iterable<List<Sentence>> it) {\n            for (List<Sentence> list1 : it) {\n                list.addAll(list1);\n            }\n        }\n\n        @Override\n        public TypeInformation<List<Sentence>> getAccumulatorType() {\n\n            return TypeInformation.of(new TypeHint<List<Sentence>>() {\n            });\n\n        }\n\n        @Override\n        public TypeInformation<List<Sentence>> getResultType() {\n\n            return TypeInformation.of(new TypeHint<List<Sentence>>() {\n            });\n        }\n    }\n\n    public static class ToJson extends ScalarFunction {\n        public String eval(List<String> in) {\n            return JacksonUtils.bean2Json(in);\n        }\n    }\n\n    /**\n     * Set Aggregate\n     * @author Liu Yang\n     * @date 2022/3/28 16:46\n     */\n    @FunctionHint(\n            input = {@DataTypeHint(\"STRING\")},\n            output = @DataTypeHint(\"STRING\")\n    )\n    public static class CollectList2 extends AggregateFunction<String, TreeSetAccumulator> {\n\n        private String delimiter;\n\n        public void accumulate(TreeSetAccumulator acc, String value){\n            if (value == null) {\n                return;\n            }\n            if (value instanceof Comparable) {\n                acc.add((String) value);\n            }\n        }\n\n        @Override\n        public String getValue(TreeSetAccumulator accumulator) {\n            return JacksonUtils.bean2Json(accumulator.getLocalValue());\n        }\n\n        @Override\n        public TreeSetAccumulator createAccumulator() {\n            return new TreeSetAccumulator<>();\n        }\n    }\n\n    public static class TreeSetAccumulator<T extends Comparable<?>>\n            implements Accumulator<T, TreeSet<T>> {\n        private static final long serialVersionUID = 1L;\n\n        // Tips: Construction of sorted collection with non-comparable elements\n        private TreeSet<T> localValue = new TreeSet<>();\n\n        @Override\n        public void add(T value) {\n            localValue.add(value);\n        }\n\n        @Override\n        public 
TreeSet<T> getLocalValue() {\n            return localValue;\n        }\n\n        @Override\n        public void resetLocal() {\n            localValue.clear();\n        }\n\n        @Override\n        public void merge(Accumulator<T, TreeSet<T>> other) {\n            localValue.addAll(other.getLocalValue());\n        }\n\n        @Override\n        public Accumulator<T, TreeSet<T>> clone() {\n            TreeSetAccumulator<T> newInstance = new TreeSetAccumulator<T>();\n            newInstance.localValue = new TreeSet<>(localValue);\n            return newInstance;\n        }\n\n        @Override\n        public String toString() {\n            return \"TreeSet Accumulator \" + localValue;\n        }\n    }\n\n\n\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_05_scalar_function/ExplodeUDTF.java",
    "content": "package flink.examples.sql._09.udf._05_scalar_function;\n\nimport java.util.Set;\n\nimport org.apache.flink.table.annotation.DataTypeHint;\nimport org.apache.flink.table.functions.TableFunction;\n\n\npublic class ExplodeUDTF extends TableFunction<String> {\n\n    public void eval(@DataTypeHint(\"RAW\") Object test) {\n\n        Set<String> test1 = (Set<String>) test;\n\n        for (String t : test1) {\n            collect(t);\n        }\n    }\n\n}\n\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_05_scalar_function/ExplodeUDTFV2.java",
    "content": "package flink.examples.sql._09.udf._05_scalar_function;\n\nimport org.apache.flink.table.functions.TableFunction;\n\n\npublic class ExplodeUDTFV2 extends TableFunction<String[]> {\n\n    public void eval(String worlds) {\n\n        collect(new String[]{ worlds, worlds + \"111\"});\n        collect(new String[]{ worlds, worlds + \"222\"});\n    }\n\n}\n\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_05_scalar_function/GetMapValue.java",
    "content": "package flink.examples.sql._09.udf._05_scalar_function;\n\nimport java.util.Map;\n\nimport org.apache.flink.table.annotation.DataTypeHint;\nimport org.apache.flink.table.functions.ScalarFunction;\n\npublic class GetMapValue extends ScalarFunction {\n\n    public String eval(@DataTypeHint(\"RAW\") Object map, String key) {\n\n        Map<String, Object> innerMap = (Map<String, Object>) map;\n        try {\n            Object obj = innerMap.get(key);\n            if (obj != null) {\n                return obj.toString();\n            } else {\n                return null;\n            }\n        } catch (Exception e) {\n            return null;\n        }\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_05_scalar_function/GetSetValue.java",
    "content": "package flink.examples.sql._09.udf._05_scalar_function;\n\nimport java.util.Set;\n\nimport org.apache.flink.table.annotation.DataTypeHint;\nimport org.apache.flink.table.functions.ScalarFunction;\n\npublic class GetSetValue extends ScalarFunction {\n\n    public String eval(@DataTypeHint(\"RAW\") Object set) {\n\n        Set<String> s = (Set<String>) set;\n\n        return s.iterator().next();\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_05_scalar_function/ScalarFunctionTest.java",
    "content": "package flink.examples.sql._09.udf._05_scalar_function;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class ScalarFunctionTest {\n\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().createFunction(\"set_string\", SetStringUDF.class);\n        flinkEnv.streamTEnv().createFunction(\"explode_udtf\", ExplodeUDTF.class);\n        flinkEnv.streamTEnv().createFunction(\"get_map_value\", GetMapValue.class);\n\n        String sql = \"CREATE TABLE Orders (\\n\"\n                + \"    order_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE target_table (\\n\"\n                + \"    order_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time timestamp(3),\\n\"\n                + \"    name_explode STRING,\\n\"\n                + \"    i STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO target_table\\n\"\n                + \"SELECT *, cast(get_map_value(name_explode, cast('a' as string)) as string) as i\\n\"\n                + \"FROM Orders\\n\"\n                + \"LEFT JOIN lateral TABLE(\\n\"\n                + \"        explode_udtf(\\n\"\n                + \"          set_string(name)\\n\"\n                + \"        )\\n\"\n                + \"      ) AS t(name_explode) ON TRUE\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_05_scalar_function/ScalarFunctionTest2.java",
    "content": "package flink.examples.sql._09.udf._05_scalar_function;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class ScalarFunctionTest2 {\n\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().createFunction(\"set_string\", SetStringUDF.class);\n        flinkEnv.streamTEnv().createFunction(\"explode_udtf\", ExplodeUDTF.class);\n        flinkEnv.streamTEnv().createFunction(\"get_map_value\", GetMapValue.class);\n        flinkEnv.streamTEnv().createFunction(\"get_set_value\", GetSetValue.class);\n\n        String sql = \"CREATE TABLE Orders (\\n\"\n                + \"    order_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE target_table (\\n\"\n                + \"    order_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time timestamp(3),\\n\"\n                + \"    i STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO target_table\\n\"\n                + \"SELECT *, cast(get_set_value(set_string(name)) as string) as i\\n\"\n                + \"FROM Orders\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_05_scalar_function/SetStringUDF.java",
    "content": "package flink.examples.sql._09.udf._05_scalar_function;\n\nimport java.util.Set;\n\nimport org.apache.flink.api.common.typeinfo.TypeHint;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.table.annotation.DataTypeHint;\nimport org.apache.flink.table.functions.ScalarFunction;\n\nimport com.google.common.collect.Sets;\n\n\npublic class SetStringUDF extends ScalarFunction {\n\n    @DataTypeHint(\"RAW\")\n    public Object eval(String input) {\n        return Sets.newHashSet(input, input + \"_1\", input + \"_2\");\n    }\n\n    @Override\n    public TypeInformation<?> getResultType(Class<?>[] signature) {\n        return TypeInformation.of(new TypeHint<Set<String>>() {\n        });\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_09/udf/_05_scalar_function/TableFunctionTest2.java",
    "content": "package flink.examples.sql._09.udf._05_scalar_function;\n\nimport java.util.Arrays;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class TableFunctionTest2 {\n\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.streamTEnv().createFunction(\"explode_udtf_v2\", ExplodeUDTFV2.class);\n\n        String sql = \"CREATE TABLE Orders (\\n\"\n                + \"    order_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time AS cast(CURRENT_TIMESTAMP as timestamp(3)),\\n\"\n                + \"    WATERMARK FOR row_time AS row_time - INTERVAL '5' SECOND\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '10',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.order_id.min' = '1',\\n\"\n                + \"  'fields.order_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE target_table (\\n\"\n                + \"    order_id BIGINT NOT NULL,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    row_time timestamp(3),\\n\"\n                + \"    i STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"INSERT INTO target_table\\n\"\n                + \"SELECT order_id, name, row_time,  name_explode[2] as i\\n\"\n                + \"FROM Orders \\n\"\n                + \"LEFT JOIN lateral TABLE(explode_udtf_v2(name)) AS t(name_explode) ON TRUE\\n\";\n\n        Arrays.stream(sql.split(\";\"))\n                .forEach(flinkEnv.streamTEnv()::executeSql);\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_10_share/A.java",
    "content": "package flink.examples.sql._10_share;\n\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.TimeCharacteristic;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.EnvironmentSettings;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.TableResult;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.types.Row;\n\npublic class A {\n\n    public static void main(String[] args) throws Exception {\n\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        env.setParallelism(1);\n\n        EnvironmentSettings settings = EnvironmentSettings\n                .newInstance()\n                .useBlinkPlanner()\n                .inStreamingMode().build();\n\n        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);\n\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);\n\n        DataStream<Row> source = env.addSource(new UserDefinedSource());\n\n        Table sourceTable = tEnv.fromDataStream(source, \"stat_date,\\n\"\n                + \"  order_id,\\n\"\n                + \"  buyer_id,\\n\"\n                + \"  seller_id,\\n\"\n                + \"  buy_amount,\\n\"\n                + \"  div_pay_amt\");\n\n        tEnv.createTemporaryView(\"dwd_tb_trd_ord_ent_di_m\", sourceTable);\n\n        String sql = \"CREATE TABLE dimTable (\\n\"\n                + \"    name STRING,\\n\"\n                + \"    name1 STRING,\\n\"\n                + \"    score BIGINT\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'redis',\\n\"\n                + \"  'hostname' = '127.0.0.1',\\n\"\n                + \"  'port' = '6379',\\n\"\n                + \"  'format' = 'json',\\n\"\n                + \"  'lookup.cache.max-rows' = '500',\\n\"\n                + \"  'lookup.cache.ttl' = '3600',\\n\"\n                + \"  'lookup.max-retries' = '1'\\n\"\n                + \")\";\n\n        String joinSql = \"SELECT o.f0, o.f1, c.name, c.name1, c.score\\n\"\n                + \"FROM leftTable AS o\\n\"\n                + \"LEFT JOIN dimTable FOR SYSTEM_TIME AS OF o.proctime AS c\\n\"\n                + \"ON o.f0 = c.name\";\n\n        TableResult dimTable = tEnv.executeSql(sql);\n\n        Table t = tEnv.sqlQuery(joinSql);\n\n        //        Table t = tEnv.sqlQuery(\"select * from leftTable\");\n\n        tEnv.toAppendStream(t, Row.class).print();\n\n        env.execute();\n    }\n\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\", \"b\", 1L));\n\n                Thread.sleep(10L);\n            }\n\n        }\n\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public 
TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(TypeInformation.of(String.class), TypeInformation.of(String.class),\n                    TypeInformation.of(Long.class));\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_11_explain/Explain_Test.java",
    "content": "package flink.examples.sql._11_explain;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Explain_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT COMMENT '用户 id',\\n\"\n                + \"    name STRING COMMENT '用户姓名',\\n\"\n                + \"    server_timestamp BIGINT COMMENT '用户访问时间戳',\\n\"\n                + \"    proctime AS PROCTIME()\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.name.length' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10',\\n\"\n                + \"  'fields.server_timestamp.min' = '1',\\n\"\n                + \"  'fields.server_timestamp.max' = '100000'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    user_id BIGINT,\\n\"\n                + \"    name STRING,\\n\"\n                + \"    server_timestamp BIGINT\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"EXPLAIN PLAN FOR\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_id,\\n\"\n                + \"       name,\\n\"\n                + \"       server_timestamp\\n\"\n                + \"from (\\n\"\n                + \"      SELECT\\n\"\n                + \"          user_id,\\n\"\n                + \"          name,\\n\"\n                + \"          server_timestamp,\\n\"\n                + \"          row_number() over(partition by user_id order by proctime) as rn\\n\"\n                + \"      FROM source_table\\n\"\n                + \")\\n\"\n                + \"where rn = 1\";\n\n        /**\n         * 算子 {@link org.apache.flink.streaming.api.operators.KeyedProcessOperator}\n         *      -- {@link org.apache.flink.table.runtime.operators.deduplicate.ProcTimeDeduplicateKeepFirstRowFunction}\n         */\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_12_data_type/_01_interval/Timestamp3_Interval_To_Test.java",
    "content": "package flink.examples.sql._12_data_type._01_interval;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Timestamp3_Interval_To_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE TABLE sink_table (\\n\"\n                + \"    result_interval_year TIMESTAMP(3),\\n\"\n                + \"    result_interval_year_p TIMESTAMP(3),\\n\"\n                + \"    result_interval_year_p_to_month TIMESTAMP(3),\\n\"\n                + \"    result_interval_month TIMESTAMP(3),\\n\"\n                + \"    result_interval_day TIMESTAMP(3),\\n\"\n                + \"    result_interval_day_p1 TIMESTAMP(3),\\n\"\n                + \"    result_interval_day_p1_to_hour TIMESTAMP(3),\\n\"\n                + \"    result_interval_day_p1_to_minute TIMESTAMP(3),\\n\"\n                + \"    result_interval_day_p1_to_second_p2 TIMESTAMP(3),\\n\"\n                + \"    result_interval_hour TIMESTAMP(3),\\n\"\n                + \"    result_interval_hour_to_minute TIMESTAMP(3),\\n\"\n                + \"    result_interval_hour_to_second TIMESTAMP(3),\\n\"\n                + \"    result_interval_minute TIMESTAMP(3),\\n\"\n                + \"    result_interval_minute_to_second_p2 TIMESTAMP(3),\\n\"\n                + \"    result_interval_second TIMESTAMP(3),\\n\"\n                + \"    result_interval_second_p2 TIMESTAMP(3)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    f1 + INTERVAL '10' YEAR as result_interval_year\\n\"\n                + \"    , f1 + INTERVAL '100' YEAR(3) as result_interval_year_p\\n\"\n                + \"    , f1 + INTERVAL '10-03' YEAR(3) TO MONTH as result_interval_year_p_to_month\\n\"\n                + \"    , f1 + INTERVAL '13' MONTH as result_interval_month\\n\"\n                + \"    , f1 + INTERVAL '10' DAY as result_interval_day\\n\"\n                + \"    , f1 + INTERVAL '100' DAY(3) as result_interval_day_p1\\n\"\n                + \"    , f1 + INTERVAL '10 03' DAY(3) TO HOUR as result_interval_day_p1_to_hour\\n\"\n                + \"    , f1 + INTERVAL '10 03:12' DAY(3) TO MINUTE as result_interval_day_p1_to_minute\\n\"\n                + \"    , f1 + INTERVAL '10 00:00:00.004' DAY TO SECOND(3) as result_interval_day_p1_to_second_p2\\n\"\n                + \"    , f1 + INTERVAL '10' HOUR as result_interval_hour\\n\"\n                + \"    , f1 + INTERVAL '10:03' HOUR TO MINUTE as result_interval_hour_to_minute\\n\"\n                + \"    , f1 + INTERVAL '00:00:00.004' HOUR TO SECOND(3) as result_interval_hour_to_second\\n\"\n                + \"    , f1 + INTERVAL '10' MINUTE as result_interval_minute\\n\"\n                + \"    , f1 + INTERVAL '05:05.006' MINUTE TO SECOND(3) as result_interval_minute_to_second_p2\\n\"\n                + \"    , f1 + INTERVAL '3' SECOND as result_interval_second\\n\"\n                + \"    , f1 + INTERVAL '300' SECOND(3) as result_interval_second_p2\\n\"\n                + \"FROM (SELECT CAST('1990-10-14 10:20:45.123' as TIMESTAMP(3)) as f1)\"\n                ;\n\n        for (String innerSql : sql.split(\";\")) {\n            
TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_12_data_type/_01_interval/Timestamp_ltz3_Interval_To_Test.java",
    "content": "package flink.examples.sql._12_data_type._01_interval;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Timestamp_ltz3_Interval_To_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        /**\n         * INTERVAL YEAR\n         * INTERVAL YEAR(p)\n         * INTERVAL YEAR(p) TO MONTH\n         * INTERVAL MONTH\n         * INTERVAL DAY\n         * INTERVAL DAY(p1)\n         * INTERVAL DAY(p1) TO HOUR\n         * INTERVAL DAY(p1) TO MINUTE\n         * INTERVAL DAY(p1) TO SECOND(p2)\n         * INTERVAL HOUR\n         * INTERVAL HOUR TO MINUTE\n         * INTERVAL HOUR TO SECOND(p2)\n         * INTERVAL MINUTE\n         * INTERVAL MINUTE TO SECOND(p2)\n         * INTERVAL SECOND\n         * INTERVAL SECOND(p2)\n         */\n\n        String sql = \"CREATE TABLE sink_table (\\n\"\n                + \"    result_interval_year TIMESTAMP(3),\\n\"\n                + \"    result_interval_year_p TIMESTAMP(3),\\n\"\n                + \"    result_interval_year_p_to_month TIMESTAMP(3),\\n\"\n                + \"    result_interval_month TIMESTAMP(3),\\n\"\n                + \"    result_interval_day TIMESTAMP(3),\\n\"\n                + \"    result_interval_day_p1 TIMESTAMP(3),\\n\"\n                + \"    result_interval_day_p1_to_hour TIMESTAMP(3),\\n\"\n                + \"    result_interval_day_p1_to_minute TIMESTAMP(3),\\n\"\n                + \"    result_interval_day_p1_to_second_p2 TIMESTAMP(3),\\n\"\n                + \"    result_interval_hour TIMESTAMP(3),\\n\"\n                + \"    result_interval_hour_to_minute TIMESTAMP(3),\\n\"\n                + \"    result_interval_hour_to_second TIMESTAMP(3),\\n\"\n                + \"    result_interval_minute TIMESTAMP(3),\\n\"\n                + \"    result_interval_minute_to_second_p2 TIMESTAMP(3),\\n\"\n                + \"    result_interval_second TIMESTAMP(3),\\n\"\n                + \"    result_interval_second_p2 TIMESTAMP(3)\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"SELECT\\n\"\n                + \"    f1 + INTERVAL '10' YEAR as result_interval_year\\n\"\n                + \"    , f1 + INTERVAL '100' YEAR(3) as result_interval_year_p\\n\"\n                + \"    , f1 + INTERVAL '10-03' YEAR(3) TO MONTH as result_interval_year_p_to_month\\n\"\n                + \"    , f1 + INTERVAL '13' MONTH as result_interval_month\\n\"\n                + \"    , f1 + INTERVAL '10' DAY as result_interval_day\\n\"\n                + \"    , f1 + INTERVAL '100' DAY(3) as result_interval_day_p1\\n\"\n                + \"    , f1 + INTERVAL '10 03' DAY(3) TO HOUR as result_interval_day_p1_to_hour\\n\"\n                + \"    , f1 + INTERVAL '10 03:12' DAY(3) TO MINUTE as result_interval_day_p1_to_minute\\n\"\n                + \"    , f1 + INTERVAL '10 00:00:00.004' DAY TO SECOND(3) as result_interval_day_p1_to_second_p2\\n\"\n                + \"    , f1 + INTERVAL '10' HOUR as result_interval_hour\\n\"\n                + \"    , f1 + INTERVAL '10:03' HOUR TO MINUTE as result_interval_hour_to_minute\\n\"\n                + \"    , f1 + INTERVAL '00:00:00.004' HOUR TO SECOND(3) as result_interval_hour_to_second\\n\"\n     
           + \"    , f1 + INTERVAL '10' MINUTE as result_interval_minute\\n\"\n                + \"    , f1 + INTERVAL '05:05.006' MINUTE TO SECOND(3) as result_interval_minute_to_second_p2\\n\"\n                + \"    , f1 + INTERVAL '3' SECOND as result_interval_second\\n\"\n                + \"    , f1 + INTERVAL '300' SECOND(3) as result_interval_second_p2\\n\"\n                + \"FROM (SELECT TO_TIMESTAMP_LTZ(1640966476500, 3) as f1)\"\n                ;\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_12_data_type/_02_user_defined/User.java",
    "content": "package flink.examples.sql._12_data_type._02_user_defined;\n\nimport java.math.BigDecimal;\n\nimport org.apache.flink.table.annotation.DataTypeHint;\n\npublic class User {\n\n    public int age;\n    public String name;\n\n    public @DataTypeHint(\"DECIMAL(10, 2)\") BigDecimal totalBalance;\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_12_data_type/_02_user_defined/UserDefinedDataTypes_Test.java",
    "content": "package flink.examples.sql._12_data_type._02_user_defined;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class UserDefinedDataTypes_Test {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE FUNCTION user_scalar_func AS 'flink.examples.sql._12_data_type._02_user_defined.UserScalarFunction';\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT NOT NULL COMMENT '用户 id'\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    result_row ROW<age INT, name STRING, totalBalance DECIMAL(10, 2)>\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select user_scalar_func(user_id) as result_row\\n\"\n                + \"from source_table\";\n                ;\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_12_data_type/_02_user_defined/UserDefinedDataTypes_Test2.java",
    "content": "package flink.examples.sql._12_data_type._02_user_defined;\n\nimport org.apache.flink.table.api.TableResult;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class UserDefinedDataTypes_Test2 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = \"CREATE FUNCTION user_scalar_func AS 'flink.examples.sql._12_data_type._02_user_defined.UserScalarFunction';\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT NOT NULL COMMENT '用户 id'\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    result_row_1 ROW<age INT, name STRING, totalBalance DECIMAL(10, 2)>,\\n\"\n                + \"    result_row_2 STRING\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select\\n\"\n                + \"    user_scalar_func(user_id) as result_row_1,\\n\"\n                + \"    user_scalar_func(user_scalar_func(user_id)) as result_row_2\\n\"\n                + \"from source_table\";\n                ;\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_12_data_type/_02_user_defined/UserScalarFunction.java",
    "content": "package flink.examples.sql._12_data_type._02_user_defined;\n\nimport java.math.BigDecimal;\n\nimport org.apache.flink.table.functions.ScalarFunction;\n\npublic class UserScalarFunction extends ScalarFunction {\n\n    // 1. 自定义数据类型作为输出参数\n    public User eval(long i) {\n        if (i > 0 && i <= 5) {\n            User u = new User();\n            u.age = (int) i;\n            u.name = \"name1\";\n            u.totalBalance = new BigDecimal(1.1d);\n            return u;\n        } else {\n            User u = new User();\n            u.age = (int) i;\n            u.name = \"name2\";\n            u.totalBalance = new BigDecimal(2.2d);\n            return u;\n        }\n    }\n\n    // 2. 自定义数据类型作为输入参数\n    public String eval(User i) {\n        if (i.age > 0 && i.age <= 5) {\n            User u = new User();\n            u.age = 1;\n            u.name = \"name1\";\n            u.totalBalance = new BigDecimal(1.1d);\n            return u.name;\n        } else {\n            User u = new User();\n            u.age = 2;\n            u.name = \"name2\";\n            u.totalBalance = new BigDecimal(2.2d);\n            return u.name;\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_12_data_type/_03_raw/RawScalarFunction.java",
    "content": "package flink.examples.sql._12_data_type._03_raw;\n\nimport java.math.BigDecimal;\n\nimport org.apache.flink.api.common.typeutils.base.StringSerializer;\nimport org.apache.flink.table.annotation.DataTypeHint;\nimport org.apache.flink.table.functions.ScalarFunction;\n\nimport flink.examples.sql._12_data_type._02_user_defined.User;\n\npublic class RawScalarFunction extends ScalarFunction {\n\n    // 1. 自定义数据类型作为输出参数\n    public User eval(long i) {\n        if (i > 0 && i <= 5) {\n            User u = new User();\n            u.age = (int) i;\n            u.name = \"name1\";\n            u.totalBalance = new BigDecimal(1.1d);\n            return u;\n        } else {\n            User u = new User();\n            u.age = (int) i;\n            u.name = \"name2\";\n            u.totalBalance = new BigDecimal(2.2d);\n            return u;\n        }\n    }\n\n    // 2. 自定义数据类型作为输入参数、自定义输出类型为 Raw 类型\n    @DataTypeHint(value = \"RAW\", bridgedTo = String.class, rawSerializer = StringSerializer.class)\n    public String eval(User i) {\n        if (i.age > 0 && i.age <= 5) {\n            User u = new User();\n            u.age = 1;\n            u.name = \"name1\";\n            u.totalBalance = new BigDecimal(1.1d);\n            return u.name;\n        } else {\n            User u = new User();\n            u.age = 2;\n            u.name = \"name2\";\n            u.totalBalance = new BigDecimal(2.2d);\n            return u.name;\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/java/flink/examples/sql/_12_data_type/_03_raw/Raw_DataTypes_Test2.java",
    "content": "package flink.examples.sql._12_data_type._03_raw;\n\nimport org.apache.flink.api.common.typeutils.base.StringSerializer;\nimport org.apache.flink.table.api.TableResult;\nimport org.apache.flink.table.types.logical.RawType;\n\nimport flink.examples.FlinkEnvUtils;\nimport flink.examples.FlinkEnvUtils.FlinkEnv;\n\n\npublic class Raw_DataTypes_Test2 {\n\n    public static void main(String[] args) throws Exception {\n\n        FlinkEnv flinkEnv = FlinkEnvUtils.getStreamTableEnv(args);\n\n        RawType rawType = new RawType(String.class, StringSerializer.INSTANCE);\n\n        String base64String = rawType.getSerializerString();\n\n        flinkEnv.env().setParallelism(1);\n\n        String sql = String.format(\"CREATE FUNCTION raw_scalar_func AS 'flink.examples.sql._12_data_type._03_raw.RawScalarFunction';\"\n                + \"\\n\"\n                + \"CREATE TABLE source_table (\\n\"\n                + \"    user_id BIGINT NOT NULL COMMENT '用户 id'\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'datagen',\\n\"\n                + \"  'rows-per-second' = '1',\\n\"\n                + \"  'fields.user_id.min' = '1',\\n\"\n                + \"  'fields.user_id.max' = '10'\\n\"\n                + \");\\n\"\n                + \"\\n\"\n                + \"CREATE TABLE sink_table (\\n\"\n                + \"    result_row_1 RAW('java.lang.String', '%s')\\n\"\n                + \") WITH (\\n\"\n                + \"  'connector' = 'print'\\n\"\n                + \");\"\n                + \"\\n\"\n                + \"INSERT INTO sink_table\\n\"\n                + \"select\\n\"\n                + \"    raw_scalar_func(raw_scalar_func(user_id)) as result_row_1\\n\"\n                + \"from source_table\", base64String);\n                ;\n\n        for (String innerSql : sql.split(\";\")) {\n            TableResult tableResult = flinkEnv.streamTEnv().executeSql(innerSql);\n\n            tableResult.print();\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.13/src/main/javacc/Simple1.jj",
    "content": "/* Copyright (c) 2006, Sun Microsystems, Inc.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *     * Redistributions of source code must retain the above copyright notice,\n *       this list of conditions and the following disclaimer.\n *     * Redistributions in binary form must reproduce the above copyright\n *       notice, this list of conditions and the following disclaimer in the\n *       documentation and/or other materials provided with the distribution.\n *     * Neither the name of the Sun Microsystems, Inc. nor the names of its\n *       contributors may be used to endorse or promote products derived from\n *       this software without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF\n * THE POSSIBILITY OF SUCH DAMAGE.\n */\n\n\noptions {\n  LOOKAHEAD = 1;\n  CHOICE_AMBIGUITY_CHECK = 2;\n  OTHER_AMBIGUITY_CHECK = 1;\n  STATIC = true;\n  DEBUG_PARSER = false;\n  DEBUG_LOOKAHEAD = false;\n  DEBUG_TOKEN_MANAGER = false;\n  ERROR_REPORTING = true;\n  JAVA_UNICODE_ESCAPE = false;\n  UNICODE_INPUT = false;\n  IGNORE_CASE = false;\n  USER_TOKEN_MANAGER = false;\n  USER_CHAR_STREAM = false;\n  BUILD_PARSER = true;\n  BUILD_TOKEN_MANAGER = true;\n  SANITY_CHECK = true;\n  FORCE_LA_CHECK = false;\n}\n\nPARSER_BEGIN(Simple1)\n\n/** Simple brace matcher. */\npublic class Simple1 {\n\n  /** Main entry point. */\n  public static void main(String args[]) throws ParseException {\n    Simple1 parser = new Simple1(System.in);\n    parser.Input();\n  }\n\n}\n\nPARSER_END(Simple1)\n\n/** Root production. */\nvoid Input() :\n{}\n{\n  MatchedBraces() (\"\\n\"|\"\\r\")* <EOF>\n}\n\n/** Brace matching production. */\nvoid MatchedBraces() :\n{}\n{\n  \"{\" [ MatchedBraces() ] \"}\"\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/proto/source.proto",
    "content": "syntax = \"proto3\";\n\npackage flink;\n\noption java_package = \"flink.examples.datastream._04.keyed_co_process.protobuf\";\noption java_outer_classname = \"SourceOuterClassname\";\noption java_multiple_files = true;\n\nmessage Source {\n    string name = 1;\n    repeated string names = 2;\n\n    map<string, int32> si_map = 7;\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/proto/test.proto",
    "content": "syntax = \"proto3\";\n\npackage flink;\n\noption java_package = \"flink.examples.sql._05.format.formats.protobuf\";\noption java_outer_classname = \"TestOuterClassname\";\noption java_multiple_files = true;\n\nmessage Test {\n    string name = 1;\n    repeated string names = 2;\n\n    map<string, int32> si_map = 7;\n}"
  },
  {
    "path": "flink-examples-1.13/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nflink.examples.sql._05.format.formats.csv.ChangelogCsvFormatFactory\nflink.examples.sql._03.source_sink.table.socket.SocketDynamicTableFactory\nflink.examples.sql._03.source_sink.table.redis.v2.RedisDynamicTableFactory\nflink.examples.sql._03.source_sink.table.user_defined.UserDefinedDynamicTableFactory\nflink.examples.sql._03.source_sink.abilities.source.Abilities_TableSourceFactory\nflink.examples.sql._03.source_sink.abilities.source.before.Before_Abilities_TableSourceFactory\nflink.examples.sql._03.source_sink.abilities.sink.Abilities_TableSinkFactory\nflink.examples.sql._05.format.formats.protobuf.rowdata.ProtobufFormatFactory"
  },
  {
    "path": "flink-examples-1.13/src/main/scala/flink/examples/sql/_04/type/TableFunc0.scala",
    "content": "package flink.examples.sql._04.`type`\n\nimport org.apache.flink.table.functions.TableFunction\n\n\ncase class SimpleUser(name: String, age: Int)\n\nclass TableFunc0 extends TableFunction[SimpleUser] {\n\n  // make sure input element's format is \"<string&gt#<int>\"\n\n  def eval(user: String): Unit = {\n\n    if (user.contains(\"#\")) {\n\n      val splits = user.split(\"#\")\n\n      collect(SimpleUser(splits(0), splits(1).toInt))\n\n    }\n  }\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/test/java/flink/examples/sql/_05/format/formats/protobuf/row/ProtobufRowDeserializationSchemaTest.java",
    "content": "//package flink.examples.sql._05.format.formats.protobuf.row;\n//\n//import java.io.ByteArrayInputStream;\n//import java.io.ByteArrayOutputStream;\n//import java.io.File;\n//import java.io.FileInputStream;\n//import java.io.IOException;\n//import java.io.ObjectInputStream;\n//import java.io.ObjectOutputStream;\n//import java.util.HashMap;\n//\n//import org.apache.flink.types.Row;\n//import org.junit.Assert;\n//import org.junit.Before;\n//import org.junit.Test;\n//\n//import com.google.common.collect.Lists;\n//\n//import flink.examples.sql._05.format.formats.protobuf.Dog;\n//import flink.examples.sql._05.format.formats.protobuf.Person;\n//import flink.examples.sql._05.format.formats.protobuf.Person.Contact;\n//import flink.examples.sql._05.format.formats.protobuf.Person.ContactType;\n//\n//public class ProtobufRowDeserializationSchemaTest {\n//\n//    private Person p;\n//\n//    private byte[] b;\n//\n//    private static final String PROTO_DESCRIPTOR_FILE_GENERATOR_CMD =\n//            \"protoc --proto_path ./src/test/proto --descriptor_set_out=./person.desc ./src/test/proto/person.proto\";\n//\n//    private static final String PROTO_JAVA_FILE_GENERATOR_CMD =\n//            \"protoc --proto_path ./src/test/proto --java_out=./ ./src/test/proto/person.proto\";\n//\n//    @Before\n//    public void initPerson() throws IOException, InterruptedException {\n//        this.p = Person\n//                .newBuilder()\n//                .setName(\"name\")\n//                .addAllNames(Lists.newArrayList(\"name1\", \"name2\"))\n//                .setId(1)\n//                .addAllIds(Lists.newArrayList(2, 3))\n//                .setLong(4L)\n//                .addAllLongs(Lists.newArrayList(5L, 6L))\n//                .putAllSiMap(new HashMap<String, Integer>() {\n//                    {\n//                        put(\"key1\", 7);\n//                    }\n//                })\n//                .putAllSlMap(new HashMap<String, Long>() {\n//                    {\n//                        put(\"key2\", 8L);\n//                    }\n//                })\n//                .putAllSdMap(new HashMap<String, Dog>() {\n//                    {\n//                        put(\"key3\", Dog.newBuilder().setId(9).setName(\"dog1\").build());\n//                    }\n//                })\n//                .setDog(Dog.newBuilder().setId(10).setName(\"dog2\").build())\n//                .addAllDogs(Lists.newArrayList(Dog.newBuilder().setId(11).setName(\"dog3\").build()))\n//                .addAllContacts(Lists.newArrayList(\n//                        Contact.newBuilder().setNumber(\"number\").setContactType(ContactType.EMAIL).build()))\n//                .build();\n//\n//        this.b = this.p.toByteArray();\n//\n//        String[] cmds = {\"bash\", \"-c\", PROTO_DESCRIPTOR_FILE_GENERATOR_CMD};\n//        Process process = Runtime.getRuntime().exec(cmds, null, new File(\"./\"));\n//\n//        int exitCode = process.waitFor();\n//\n//\n//    }\n//\n//    @Test\n//    public void deserializationProtobufToRowTest() throws IOException {\n//\n//        ProtobufRowDeserializationSchema ds = new ProtobufRowDeserializationSchema(Person.class);\n//\n//        Row row = ds.deserialize(this.b);\n//\n//        ProtobufRowSerializationSchema s = new ProtobufRowSerializationSchema(Person.class);\n//\n//        byte[] b = s.serialize(row);\n//\n//        Assert.assertArrayEquals(this.b, b);\n//\n//    }\n//\n//    @Test\n//    public void deserializationProtobufToRowByDescriptorTest() throws 
IOException {\n//\n//        File file = new File(\"./person.desc\");\n//\n//        FileInputStream fis = new FileInputStream(file);\n//\n//        byte[] descriptorBytes = new byte[(int) file.length()];\n//\n//        fis.read(descriptorBytes);\n//\n//        ProtobufRowDeserializationSchema ds = new ProtobufRowDeserializationSchema(descriptorBytes);\n//\n//        Row row = ds.deserialize(this.b);\n//\n//        ProtobufRowSerializationSchema s = new ProtobufRowSerializationSchema(descriptorBytes);\n//\n//        byte[] b = s.serialize(row);\n//\n//        Assert.assertArrayEquals(this.b, b);\n//\n//    }\n//\n//    @Test\n//    public void seAndDeseProtobufRowDeserializationSchema() throws IOException, ClassNotFoundException {\n//\n//        ProtobufRowDeserializationSchema ds = new ProtobufRowDeserializationSchema(Person.class);\n//\n//        ByteArrayOutputStream bros = new ByteArrayOutputStream();\n//\n//        ObjectOutputStream oos = new ObjectOutputStream(bros);\n//\n//        oos.writeObject(ds);\n//\n//        byte[] b = bros.toByteArray();\n//\n//        ByteArrayInputStream bris = new ByteArrayInputStream(b);\n//\n//        ObjectInputStream ois = new ObjectInputStream(bris);\n//\n//        Object o = ois.readObject();\n//\n//        Assert.assertTrue(true);\n//\n//    }\n//\n//    @Test\n//    public void seAndDeseProtobufRowDeserializationSchemaByDescriptor() throws IOException, ClassNotFoundException {\n//\n//        File file = new File(\"./person.desc\");\n//\n//        FileInputStream fis = new FileInputStream(file);\n//\n//        byte[] descriptorBytes = new byte[(int) file.length()];\n//\n//        fis.read(descriptorBytes);\n//\n//        ProtobufRowDeserializationSchema ds = new ProtobufRowDeserializationSchema(descriptorBytes);\n//\n//        ByteArrayOutputStream bros = new ByteArrayOutputStream();\n//\n//        ObjectOutputStream oos = new ObjectOutputStream(bros);\n//\n//        oos.writeObject(ds);\n//\n//        byte[] b = bros.toByteArray();\n//\n//        ByteArrayInputStream bris = new ByteArrayInputStream(b);\n//\n//        ObjectInputStream ois = new ObjectInputStream(bris);\n//\n//        Object o = ois.readObject();\n//\n//        Assert.assertTrue(true);\n//\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/test/java/flink/examples/sql/_05/format/formats/protobuf/row/ProtobufRowSerializationSchemaTest.java",
    "content": "//package flink.examples.sql._05.format.formats.protobuf.row;\n//\n//import java.io.ByteArrayInputStream;\n//import java.io.ByteArrayOutputStream;\n//import java.io.File;\n//import java.io.FileInputStream;\n//import java.io.IOException;\n//import java.io.ObjectInputStream;\n//import java.io.ObjectOutputStream;\n//import java.util.HashMap;\n//\n//import org.apache.flink.types.Row;\n//import org.junit.Assert;\n//import org.junit.Before;\n//import org.junit.Test;\n//\n//import com.google.common.collect.Lists;\n//\n//import flink.examples.sql._05.format.formats.protobuf.Dog;\n//import flink.examples.sql._05.format.formats.protobuf.Person;\n//import flink.examples.sql._05.format.formats.protobuf.Person.Contact;\n//import flink.examples.sql._05.format.formats.protobuf.Person.ContactType;\n//\n//public class ProtobufRowSerializationSchemaTest {\n//\n//    private Person p;\n//\n//    private byte[] b;\n//\n//    private Row r;\n//\n//    private static final String PROTO_DESCRIPTOR_FILE_GENERATOR_CMD =\n//            \"protoc --proto_path ./src/test/proto --descriptor_set_out=./person.desc ./src/test/proto/person.proto\";\n//\n//    private static final String PROTO_JAVA_FILE_GENERATOR_CMD =\n//            \"protoc --proto_path ./src/test/proto --java_out=./ ./src/test/proto/person.proto\";\n//\n//    @Before\n//    public void initPerson() throws IOException, InterruptedException {\n//        this.p = Person\n//                .newBuilder()\n//                .setName(\"name\")\n//                .addAllNames(Lists.newArrayList(\"name1\", \"name2\"))\n//                .setId(1)\n//                .addAllIds(Lists.newArrayList(2, 3))\n//                .setLong(4L)\n//                .addAllLongs(Lists.newArrayList(5L, 6L))\n//                .putAllSiMap(new HashMap<String, Integer>() {\n//                    {\n//                        put(\"key1\", 7);\n//                    }\n//                })\n//                .putAllSlMap(new HashMap<String, Long>() {\n//                    {\n//                        put(\"key2\", 8L);\n//                    }\n//                })\n//                .putAllSdMap(new HashMap<String, Dog>() {\n//                    {\n//                        put(\"key3\", Dog.newBuilder().setId(9).setName(\"dog1\").build());\n//                    }\n//                })\n//                .setDog(Dog.newBuilder().setId(10).setName(\"dog2\").build())\n//                .addAllDogs(Lists.newArrayList(Dog.newBuilder().setId(11).setName(\"dog3\").build()))\n//                .addAllContacts(Lists.newArrayList(\n//                        Contact.newBuilder().setNumber(\"number\").setContactType(ContactType.EMAIL).build()))\n//                .build();\n//\n//        ProtobufRowDeserializationSchema ds = new ProtobufRowDeserializationSchema(Person.class);\n//\n//        this.r = ds.deserialize(this.p.toByteArray());\n//\n//        this.b = this.p.toByteArray();\n//\n//        String[] cmds = {\"bash\", \"-c\", PROTO_DESCRIPTOR_FILE_GENERATOR_CMD};\n//        Process process = Runtime.getRuntime().exec(cmds, null, new File(\"./\"));\n//\n//        int exitCode = process.waitFor();\n//    }\n//\n//    @Test\n//    public void serializationRowToProtobufTest() throws IOException {\n//\n//        ProtobufRowSerializationSchema s = new ProtobufRowSerializationSchema(Person.class);\n//\n//        byte[] b = s.serialize(this.r);\n//\n//        Person p1 = Person.parseFrom(b);\n//\n//        Assert.assertEquals(p1, this.p);\n//\n//    }\n//\n//\n//    @Test\n//  
  public void serializationRowToProtobufByDescriptorTest() throws IOException {\n//\n//        File file = new File(\"./person.desc\");\n//\n//        FileInputStream fis = new FileInputStream(file);\n//\n//        byte[] descriptorBytes = new byte[(int) file.length()];\n//\n//        fis.read(descriptorBytes);\n//\n//        ProtobufRowSerializationSchema s = new ProtobufRowSerializationSchema(descriptorBytes);\n//\n//        byte[] b = s.serialize(this.r);\n//\n//        Person p1 = Person.parseFrom(b);\n//\n//        Assert.assertEquals(p1, this.p);\n//\n//    }\n//\n//\n//    @Test\n//    public void seAndDeseProtobufRowerializationSchema() throws IOException, ClassNotFoundException {\n//\n//        ProtobufRowSerializationSchema s = new ProtobufRowSerializationSchema(Person.class);\n//\n//        ByteArrayOutputStream bros = new ByteArrayOutputStream();\n//\n//        ObjectOutputStream oos = new ObjectOutputStream(bros);\n//\n//        oos.writeObject(s);\n//\n//        byte[] b = bros.toByteArray();\n//\n//        ByteArrayInputStream bris = new ByteArrayInputStream(b);\n//\n//        ObjectInputStream ois = new ObjectInputStream(bris);\n//\n//        Object o = ois.readObject();\n//\n//        Assert.assertTrue(true);\n//\n//    }\n//\n//\n//    @Test\n//    public void seAndDeseProtobufRowSerializationSchemaByDescriptor() throws IOException, ClassNotFoundException {\n//\n//        File file = new File(\"./person.desc\");\n//\n//        FileInputStream fis = new FileInputStream(file);\n//\n//        byte[] descriptorBytes = new byte[(int) file.length()];\n//\n//        fis.read(descriptorBytes);\n//\n//        ProtobufRowSerializationSchema ds = new ProtobufRowSerializationSchema(descriptorBytes);\n//\n//        ByteArrayOutputStream bros = new ByteArrayOutputStream();\n//\n//        ObjectOutputStream oos = new ObjectOutputStream(bros);\n//\n//        oos.writeObject(ds);\n//\n//        byte[] b = bros.toByteArray();\n//\n//        ByteArrayInputStream bris = new ByteArrayInputStream(b);\n//\n//        ObjectInputStream ois = new ObjectInputStream(bris);\n//\n//        Object o = ois.readObject();\n//\n//        Assert.assertTrue(true);\n//\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/test/java/flink/examples/sql/_05/format/formats/protobuf/rowdata/ProtobufRowDataDeserializationSchemaTest.java",
    "content": "//package flink.examples.sql._05.format.formats.protobuf.rowdata;\n//\n//import java.io.ByteArrayInputStream;\n//import java.io.ByteArrayOutputStream;\n//import java.io.File;\n//import java.io.FileInputStream;\n//import java.io.IOException;\n//import java.io.ObjectInputStream;\n//import java.io.ObjectOutputStream;\n//import java.util.HashMap;\n//\n//import org.apache.flink.table.data.RowData;\n//import org.apache.flink.types.Row;\n//import org.junit.Assert;\n//import org.junit.Before;\n//import org.junit.Test;\n//\n//import com.google.common.collect.Lists;\n//\n//import flink.examples.sql._05.format.formats.protobuf.Dog;\n//import flink.examples.sql._05.format.formats.protobuf.Person;\n//import flink.examples.sql._05.format.formats.protobuf.Person.Contact;\n//import flink.examples.sql._05.format.formats.protobuf.Person.ContactType;\n//import flink.examples.sql._05.format.formats.protobuf.row.ProtobufRowDeserializationSchema;\n//import flink.examples.sql._05.format.formats.protobuf.row.ProtobufRowSerializationSchema;\n//\n//public class ProtobufRowDataDeserializationSchemaTest {\n//\n//    private Person p;\n//\n//    private byte[] b;\n//\n//    private static final String PROTO_DESCRIPTOR_FILE_GENERATOR_CMD =\n//            \"protoc --proto_path ./src/test/proto --descriptor_set_out=./person.desc ./src/test/proto/person.proto\";\n//\n//    private static final String PROTO_JAVA_FILE_GENERATOR_CMD =\n//            \"protoc --proto_path ./src/test/proto --java_out=./ ./src/test/proto/person.proto\";\n//\n//    @Before\n//    public void initPerson() throws IOException, InterruptedException {\n//        this.p = Person\n//                .newBuilder()\n//                .setName(\"name\")\n//                .addAllNames(Lists.newArrayList(\"name1\", \"name2\"))\n//                .setId(1)\n//                .addAllIds(Lists.newArrayList(2, 3))\n//                .setLong(4L)\n//                .addAllLongs(Lists.newArrayList(5L, 6L))\n//                .putAllSiMap(new HashMap<String, Integer>() {\n//                    {\n//                        put(\"key1\", 7);\n//                    }\n//                })\n//                .putAllSlMap(new HashMap<String, Long>() {\n//                    {\n//                        put(\"key2\", 8L);\n//                    }\n//                })\n//                .putAllSdMap(new HashMap<String, Dog>() {\n//                    {\n//                        put(\"key3\", Dog.newBuilder().setId(9).setName(\"dog1\").build());\n//                    }\n//                })\n//                .setDog(Dog.newBuilder().setId(10).setName(\"dog2\").build())\n//                .addAllDogs(Lists.newArrayList(Dog.newBuilder().setId(11).setName(\"dog3\").build()))\n//                .addAllContacts(Lists.newArrayList(\n//                        Contact.newBuilder().setNumber(\"number\").setContactType(ContactType.EMAIL).build()))\n//                .build();\n//\n//        this.b = this.p.toByteArray();\n//\n//        String[] cmds = {\"bash\", \"-c\", PROTO_DESCRIPTOR_FILE_GENERATOR_CMD};\n//        Process process = Runtime.getRuntime().exec(cmds, null, new File(\"./\"));\n//\n//        int exitCode = process.waitFor();\n//\n//\n//    }\n//\n//    @Test\n//    public void deserializationProtobufToRowTest() throws IOException {\n//\n//        ProtobufRowDataDeserializationSchema ds = new ProtobufRowDataDeserializationSchema(\n//                Person.class\n//                , true\n//                , null);\n//\n//        RowData rowData = 
ds.deserialize(this.b);\n//\n//        Assert.assertArrayEquals(this.b, b);\n//\n//    }\n//\n//    @Test\n//    public void deserializationProtobufToRowByDescriptorTest() throws IOException {\n//\n//        File file = new File(\"./person.desc\");\n//\n//        FileInputStream fis = new FileInputStream(file);\n//\n//        byte[] descriptorBytes = new byte[(int) file.length()];\n//\n//        fis.read(descriptorBytes);\n//\n//        ProtobufRowDeserializationSchema ds = new ProtobufRowDeserializationSchema(descriptorBytes);\n//\n//        Row row = ds.deserialize(this.b);\n//\n//        ProtobufRowSerializationSchema s = new ProtobufRowSerializationSchema(descriptorBytes);\n//\n//        byte[] b = s.serialize(row);\n//\n//        Assert.assertArrayEquals(this.b, b);\n//\n//    }\n//\n//    @Test\n//    public void seAndDeseProtobufRowDeserializationSchema() throws IOException, ClassNotFoundException {\n//\n//\n//    }\n//\n//    @Test\n//    public void seAndDeseProtobufRowDeserializationSchemaByDescriptor() throws IOException, ClassNotFoundException {\n//\n//        File file = new File(\"./person.desc\");\n//\n//        FileInputStream fis = new FileInputStream(file);\n//\n//        byte[] descriptorBytes = new byte[(int) file.length()];\n//\n//        fis.read(descriptorBytes);\n//\n//        ProtobufRowDeserializationSchema ds = new ProtobufRowDeserializationSchema(descriptorBytes);\n//\n//        ByteArrayOutputStream bros = new ByteArrayOutputStream();\n//\n//        ObjectOutputStream oos = new ObjectOutputStream(bros);\n//\n//        oos.writeObject(ds);\n//\n//        byte[] b = bros.toByteArray();\n//\n//        ByteArrayInputStream bris = new ByteArrayInputStream(b);\n//\n//        ObjectInputStream ois = new ObjectInputStream(bris);\n//\n//        Object o = ois.readObject();\n//\n//        Assert.assertTrue(true);\n//\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/test/java/flink/examples/sql/_05/format/formats/protobuf/rowdata/ProtobufRowDataSerializationSchemaTest.java",
    "content": "//package flink.examples.sql._05.format.formats.protobuf.rowdata;\n//\n//import java.io.ByteArrayInputStream;\n//import java.io.ByteArrayOutputStream;\n//import java.io.File;\n//import java.io.FileInputStream;\n//import java.io.IOException;\n//import java.io.ObjectInputStream;\n//import java.io.ObjectOutputStream;\n//import java.util.HashMap;\n//\n//import org.apache.flink.types.Row;\n//import org.junit.Assert;\n//import org.junit.Before;\n//import org.junit.Test;\n//\n//import com.google.common.collect.Lists;\n//\n//import flink.examples.sql._05.format.formats.protobuf.Dog;\n//import flink.examples.sql._05.format.formats.protobuf.Person;\n//import flink.examples.sql._05.format.formats.protobuf.Person.Contact;\n//import flink.examples.sql._05.format.formats.protobuf.Person.ContactType;\n//import flink.examples.sql._05.format.formats.protobuf.row.ProtobufRowDeserializationSchema;\n//import flink.examples.sql._05.format.formats.protobuf.row.ProtobufRowSerializationSchema;\n//\n//public class ProtobufRowDataSerializationSchemaTest {\n//\n//    private Person p;\n//\n//    private byte[] b;\n//\n//    private Row r;\n//\n//    private static final String PROTO_DESCRIPTOR_FILE_GENERATOR_CMD =\n//            \"protoc --proto_path ./src/test/proto --descriptor_set_out=./person.desc ./src/test/proto/person.proto\";\n//\n//    private static final String PROTO_JAVA_FILE_GENERATOR_CMD =\n//            \"protoc --proto_path ./src/test/proto --java_out=./ ./src/test/proto/person.proto\";\n//\n//    @Before\n//    public void initPerson() throws IOException, InterruptedException {\n//        this.p = Person\n//                .newBuilder()\n//                .setName(\"name\")\n//                .addAllNames(Lists.newArrayList(\"name1\", \"name2\"))\n//                .setId(1)\n//                .addAllIds(Lists.newArrayList(2, 3))\n//                .setLong(4L)\n//                .addAllLongs(Lists.newArrayList(5L, 6L))\n//                .putAllSiMap(new HashMap<String, Integer>() {\n//                    {\n//                        put(\"key1\", 7);\n//                    }\n//                })\n//                .putAllSlMap(new HashMap<String, Long>() {\n//                    {\n//                        put(\"key2\", 8L);\n//                    }\n//                })\n//                .putAllSdMap(new HashMap<String, Dog>() {\n//                    {\n//                        put(\"key3\", Dog.newBuilder().setId(9).setName(\"dog1\").build());\n//                    }\n//                })\n//                .setDog(Dog.newBuilder().setId(10).setName(\"dog2\").build())\n//                .addAllDogs(Lists.newArrayList(Dog.newBuilder().setId(11).setName(\"dog3\").build()))\n//                .addAllContacts(Lists.newArrayList(\n//                        Contact.newBuilder().setNumber(\"number\").setContactType(ContactType.EMAIL).build()))\n//                .build();\n//\n//        ProtobufRowDeserializationSchema ds = new ProtobufRowDeserializationSchema(Person.class);\n//\n//        this.r = ds.deserialize(this.p.toByteArray());\n//\n//        this.b = this.p.toByteArray();\n//\n//        String[] cmds = {\"bash\", \"-c\", PROTO_DESCRIPTOR_FILE_GENERATOR_CMD};\n//        Process process = Runtime.getRuntime().exec(cmds, null, new File(\"./\"));\n//\n//        int exitCode = process.waitFor();\n//    }\n//\n//    @Test\n//    public void serializationRowToProtobufTest() throws IOException {\n//\n//        ProtobufRowSerializationSchema s = new 
ProtobufRowSerializationSchema(Person.class);\n//\n//        byte[] b = s.serialize(this.r);\n//\n//        Person p1 = Person.parseFrom(b);\n//\n//        Assert.assertEquals(p1, this.p);\n//\n//    }\n//\n//\n//    @Test\n//    public void serializationRowToProtobufByDescriptorTest() throws IOException {\n//\n//        File file = new File(\"./person.desc\");\n//\n//        FileInputStream fis = new FileInputStream(file);\n//\n//        byte[] descriptorBytes = new byte[(int) file.length()];\n//\n//        fis.read(descriptorBytes);\n//\n//        ProtobufRowSerializationSchema s = new ProtobufRowSerializationSchema(descriptorBytes);\n//\n//        byte[] b = s.serialize(this.r);\n//\n//        Person p1 = Person.parseFrom(b);\n//\n//        Assert.assertEquals(p1, this.p);\n//\n//    }\n//\n//\n//    @Test\n//    public void seAndDeseProtobufRowerializationSchema() throws IOException, ClassNotFoundException {\n//\n//        ProtobufRowSerializationSchema s = new ProtobufRowSerializationSchema(Person.class);\n//\n//        ByteArrayOutputStream bros = new ByteArrayOutputStream();\n//\n//        ObjectOutputStream oos = new ObjectOutputStream(bros);\n//\n//        oos.writeObject(s);\n//\n//        byte[] b = bros.toByteArray();\n//\n//        ByteArrayInputStream bris = new ByteArrayInputStream(b);\n//\n//        ObjectInputStream ois = new ObjectInputStream(bris);\n//\n//        Object o = ois.readObject();\n//\n//        Assert.assertTrue(true);\n//\n//    }\n//\n//\n//    @Test\n//    public void seAndDeseProtobufRowSerializationSchemaByDescriptor() throws IOException, ClassNotFoundException {\n//\n//        File file = new File(\"./person.desc\");\n//\n//        FileInputStream fis = new FileInputStream(file);\n//\n//        byte[] descriptorBytes = new byte[(int) file.length()];\n//\n//        fis.read(descriptorBytes);\n//\n//        ProtobufRowSerializationSchema ds = new ProtobufRowSerializationSchema(descriptorBytes);\n//\n//        ByteArrayOutputStream bros = new ByteArrayOutputStream();\n//\n//        ObjectOutputStream oos = new ObjectOutputStream(bros);\n//\n//        oos.writeObject(ds);\n//\n//        byte[] b = bros.toByteArray();\n//\n//        ByteArrayInputStream bris = new ByteArrayInputStream(b);\n//\n//        ObjectInputStream ois = new ObjectInputStream(bris);\n//\n//        Object o = ois.readObject();\n//\n//        Assert.assertTrue(true);\n//\n//    }\n//\n//}\n"
  },
  {
    "path": "flink-examples-1.13/src/test/java/flink/examples/sql/_06/calcite/CalciteTest.java",
    "content": "package flink.examples.sql._06.calcite;\n\nimport java.util.List;\n\nimport org.apache.calcite.plan.RelOptUtil;\nimport org.apache.calcite.plan.RelTraitDef;\nimport org.apache.calcite.rel.RelNode;\nimport org.apache.calcite.schema.SchemaPlus;\nimport org.apache.calcite.sql.parser.SqlParser;\nimport org.apache.calcite.tools.FrameworkConfig;\nimport org.apache.calcite.tools.Frameworks;\nimport org.apache.calcite.tools.Programs;\nimport org.apache.calcite.tools.RelBuilder;\n\npublic class CalciteTest {\n\n    public static void main(String[] args) {\n        final FrameworkConfig config = config().build();\n        final RelBuilder builder = RelBuilder.create(config);\n        final RelNode node = builder\n                .scan(\"EMP\")\n                .build();\n        System.out.println(RelOptUtil.toString(node));\n    }\n\n    public static Frameworks.ConfigBuilder config() {\n        final SchemaPlus rootSchema = Frameworks.createRootSchema(true);\n        return Frameworks.newConfigBuilder()\n                .parserConfig(SqlParser.Config.DEFAULT)\n                .traitDefs((List<RelTraitDef>) null)\n                .programs(Programs.heuristicJoinOrder(Programs.RULE_SET, true, 2));\n    }\n\n}\n"
  },
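  {
    "path": "flink-examples-1.13/src/test/java/flink/examples/sql/_06/calcite/CalciteSqlParseSketch.java",
    "content": "/*\n * Illustrative sketch only (not part of the original repo): parses a SQL string into\n * Calcite's SqlNode AST with the same SqlParser.Config that CalciteTest uses. The class\n * name, file location and the sample query are assumptions made for this example.\n */\npackage flink.examples.sql._06.calcite;\n\nimport org.apache.calcite.sql.SqlNode;\nimport org.apache.calcite.sql.parser.SqlParseException;\nimport org.apache.calcite.sql.parser.SqlParser;\n\npublic class CalciteSqlParseSketch {\n\n    public static void main(String[] args) throws SqlParseException {\n        // create a parser for a simple query and print the parsed SqlNode tree\n        SqlParser parser = SqlParser.create(\n                \"select id, name from source_table where id > 10\",\n                SqlParser.Config.DEFAULT);\n        SqlNode sqlNode = parser.parseStmt();\n        System.out.println(sqlNode.toString());\n    }\n\n}\n"
  },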
  {
    "path": "flink-examples-1.13/src/test/java/flink/examples/sql/_07/query/_06_joins/JaninoCompileTest.java",
    "content": "package flink.examples.sql._07.query._06_joins;\n\nimport org.apache.flink.api.common.functions.RichFlatMapFunction;\nimport org.apache.flink.table.runtime.collector.TableFunctionCollector;\n\nimport flink.core.source.JaninoUtils;\n\npublic class JaninoCompileTest {\n\n    public static void main(String[] args) throws Exception {\n        String s = \"import java.util.List;\\n\"\n                + \"\\n\"\n                + \"public class BatchJoinTableFuncCollector$8 extends org.apache.flink.table.runtime.collector\"\n                + \".TableFunctionCollector {\\n\"\n                + \"\\n\"\n                + \"    org.apache.flink.table.data.GenericRowData out = new org.apache.flink.table.data\"\n                + \".GenericRowData(2);\\n\"\n                + \"    org.apache.flink.table.data.utils.JoinedRowData joinedRow$7 = new org.apache.flink.table.data\"\n                + \".utils.JoinedRowData();\\n\"\n                + \"\\n\"\n                + \"    public BatchJoinTableFuncCollector$8(Object[] references) throws Exception {\\n\"\n                + \"\\n\"\n                + \"    }\\n\"\n                + \"\\n\"\n                + \"    @Override\\n\"\n                + \"    public void open(org.apache.flink.configuration.Configuration parameters) throws Exception {\\n\"\n                + \"\\n\"\n                + \"    }\\n\"\n                + \"\\n\"\n                + \"    @Override\\n\"\n                + \"    public void collect(Object record) throws Exception {\\n\"\n                + \"        List<org.apache.flink.table.data.RowData> l = (List<org.apache.flink.table.data.RowData>) \"\n                + \"getInput();\\n\"\n                + \"        List<org.apache.flink.table.data.RowData> r = (List<org.apache.flink.table.data.RowData>) \"\n                + \"record;\\n\"\n                + \"\\n\"\n                + \"        for (int i = 0; i < l.size(); i++) {\\n\"\n                + \"\\n\"\n                + \"            org.apache.flink.table.data.RowData in1 = (org.apache.flink.table.data.RowData) l.get(i);\\n\"\n                + \"            org.apache.flink.table.data.RowData in2 = (org.apache.flink.table.data.RowData) r.get(i);\\n\"\n                + \"\\n\"\n                + \"            org.apache.flink.table.data.binary.BinaryStringData field$5;\\n\"\n                + \"            boolean isNull$5;\\n\"\n                + \"            long field$6;\\n\"\n                + \"            boolean isNull$6;\\n\"\n                + \"            isNull$6 = in2.isNullAt(1);\\n\"\n                + \"            field$6 = -1L;\\n\"\n                + \"            if (!isNull$6) {\\n\"\n                + \"                field$6 = in2.getLong(1);\\n\"\n                + \"            }\\n\"\n                + \"            isNull$5 = in2.isNullAt(0);\\n\"\n                + \"            field$5 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\\n\"\n                + \"            if (!isNull$5) {\\n\"\n                + \"                field$5 = ((org.apache.flink.table.data.binary.BinaryStringData) in2.getString(0))\"\n                + \";\\n\"\n                + \"            }\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"            if (isNull$5) {\\n\"\n                + \"                out.setField(0, null);\\n\"\n                + \"            } else 
{\\n\"\n                + \"                out.setField(0, field$5);\\n\"\n                + \"            }\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"            if (isNull$6) {\\n\"\n                + \"                out.setField(1, null);\\n\"\n                + \"            } else {\\n\"\n                + \"                out.setField(1, field$6);\\n\"\n                + \"            }\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"            joinedRow$7.replace(in1, out);\\n\"\n                + \"            joinedRow$7.setRowKind(in1.getRowKind());\\n\"\n                + \"            outputResult(joinedRow$7);\\n\"\n                + \"        }\\n\"\n                + \"\\n\"\n                + \"    }\\n\"\n                + \"\\n\"\n                + \"    @Override\\n\"\n                + \"    public void close() throws Exception {\\n\"\n                + \"\\n\"\n                + \"    }\\n\"\n                + \"}\";\n\n        Class<TableFunctionCollector> c = JaninoUtils.genClass(\"BatchJoinTableFuncCollector$8\", s, TableFunctionCollector.class);\n\n        System.out.println(1);\n\n\n        String s2 = \"\\n\"\n                + \"      public class JoinTableFuncCollector$8 extends org.apache.flink.table.runtime.collector\"\n                + \".TableFunctionCollector {\\n\"\n                + \"\\n\"\n                + \"        org.apache.flink.table.data.GenericRowData out = new org.apache.flink.table.data\"\n                + \".GenericRowData(2);\\n\"\n                + \"org.apache.flink.table.data.utils.JoinedRowData joinedRow$7 = new org.apache.flink.table.data\"\n                + \".utils.JoinedRowData();\\n\"\n                + \"\\n\"\n                + \"        public JoinTableFuncCollector$8(Object[] references) throws Exception {\\n\"\n                + \"          \\n\"\n                + \"        }\\n\"\n                + \"\\n\"\n                + \"        @Override\\n\"\n                + \"        public void open(org.apache.flink.configuration.Configuration parameters) throws Exception\"\n                + \" {\\n\"\n                + \"          \\n\"\n                + \"        }\\n\"\n                + \"\\n\"\n                + \"        @Override\\n\"\n                + \"        public void collect(Object record) throws Exception {\\n\"\n                + \"          org.apache.flink.table.data.RowData in1 = (org.apache.flink.table.data.RowData) getInput\"\n                + \"();\\n\"\n                + \"          org.apache.flink.table.data.RowData in2 = (org.apache.flink.table.data.RowData) record;\\n\"\n                + \"          org.apache.flink.table.data.binary.BinaryStringData field$5;\\n\"\n                + \"boolean isNull$5;\\n\"\n                + \"long field$6;\\n\"\n                + \"boolean isNull$6;\\n\"\n                + \"          isNull$6 = in2.isNullAt(1);\\n\"\n                + \"field$6 = -1L;\\n\"\n                + \"if (!isNull$6) {\\n\"\n                + \"  field$6 = in2.getLong(1);\\n\"\n                + \"}\\n\"\n                + \"isNull$5 = in2.isNullAt(0);\\n\"\n                + \"field$5 = org.apache.flink.table.data.binary.BinaryStringData.EMPTY_UTF8;\\n\"\n                + \"if (!isNull$5) {\\n\"\n                + \"  field$5 = ((org.apache.flink.table.data.binary.BinaryStringData) in2.getString(0));\\n\"\n                + \"}\\n\"\n                + \"          \\n\"\n             
   + \"          \\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"if (isNull$5) {\\n\"\n                + \"  out.setField(0, null);\\n\"\n                + \"} else {\\n\"\n                + \"  out.setField(0, field$5);\\n\"\n                + \"}\\n\"\n                + \"          \\n\"\n                + \"\\n\"\n                + \"\\n\"\n                + \"if (isNull$6) {\\n\"\n                + \"  out.setField(1, null);\\n\"\n                + \"} else {\\n\"\n                + \"  out.setField(1, field$6);\\n\"\n                + \"}\\n\"\n                + \"          \\n\"\n                + \"        \\n\"\n                + \"joinedRow$7.replace(in1, out);\\n\"\n                + \"joinedRow$7.setRowKind(in1.getRowKind());\\n\"\n                + \"outputResult(joinedRow$7);\\n\"\n                + \"      \\n\"\n                + \"        }\\n\"\n                + \"\\n\"\n                + \"        @Override\\n\"\n                + \"        public void close() throws Exception {\\n\"\n                + \"          \\n\"\n                + \"        }\\n\"\n                + \"      }\\n\"\n                + \"    \";\n\n        Class<TableFunctionCollector> c1 = JaninoUtils.genClass(\"JoinTableFuncCollector$8\", s2, TableFunctionCollector.class);\n\n        System.out.println(1);\n\n        String s3 = \"/* 1 */\\n\"\n                + \"/* 2 */      import java.util.LinkedList;\\n\"\n                + \"/* 3 */      import java.util.List;\\n\"\n                + \"/* 4 */      public class LookupFunction$4\\n\"\n                + \"        /* 5 */          extends org.apache.flink.api.common.functions.RichFlatMapFunction {\\n\"\n                + \"    /* 6 */\\n\"\n                + \"    /* 7 */        private transient flink.examples.sql._03.source_sink.table.redis.v2.source\"\n                + \".RedisRowDataLookupFunction \"\n                +\n                \"function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc;\\n\"\n                + \"    /* 8 */        private TableFunctionResultConverterCollector$2 resultConverterCollector$3 = \"\n                + \"null;\\n\"\n                + \"    /* 9 */\\n\"\n                + \"    /* 10 */        public LookupFunction$4(Object[] references) throws Exception {\\n\"\n                + \"        /* 11 */          \"\n                +\n                \"function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc = (((flink.examples.sql._03.source_sink.table.redis.v2.source.RedisRowDataLookupFunction) references[0]));\\n\"\n                + \"        /* 12 */        }\\n\"\n                + \"    /* 13 */\\n\"\n                + \"    /* 14 */\\n\"\n                + \"    /* 15 */\\n\"\n                + \"    /* 16 */        @Override\\n\"\n                + \"    /* 17 */        public void open(org.apache.flink.configuration.Configuration parameters) \"\n                + \"throws Exception {\\n\"\n                + \"        /* 18 */\\n\"\n                + \"        /* 19 */          \"\n                +\n                \"function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.open(new org.apache.flink.table.functions.FunctionContext(getRuntimeContext()));\\n\"\n                + \"        /* 20 */\\n\"\n              
  + \"        /* 21 */\\n\"\n                + \"        /* 22 */          resultConverterCollector$3 = new TableFunctionResultConverterCollector$2\"\n                + \"();\\n\"\n                + \"        /* 23 */          resultConverterCollector$3.setRuntimeContext(getRuntimeContext());\\n\"\n                + \"        /* 24 */          resultConverterCollector$3.open(new org.apache.flink.configuration\"\n                + \".Configuration());\\n\"\n                + \"        /* 25 */\\n\"\n                + \"        /* 26 */\\n\"\n                + \"        /* 27 */          \"\n                +\n                \"function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.setCollector(resultConverterCollector$3);\\n\"\n                + \"        /* 28 */\\n\"\n                + \"        /* 29 */        }\\n\"\n                + \"    /* 30 */\\n\"\n                + \"    /* 31 */        @Override\\n\"\n                + \"    /* 32 */        public void flatMap(Object _in1, org.apache.flink.util.Collector c) throws \"\n                + \"Exception {\\n\"\n                + \"        /* 33 */          List<org.apache.flink.table.data.RowData> l = (List<org.apache.flink\"\n                + \".table.data.RowData>) _in1;\\n\"\n                + \"        /* 34 */          List<org.apache.flink.table.data.binary.BinaryStringData> list = new \"\n                + \"LinkedList<org.apache.flink.table.data.RowData>();\\n\"\n                + \"        /* 35 */          for (int i = 0; i < l.size(); i++) {\\n\"\n                + \"            /* 36 */\\n\"\n                + \"            /* 37 */              org.apache.flink.table.data.RowData in1 = (org.apache.flink\"\n                + \".table.data.RowData) l.get(i);\\n\"\n                + \"            /* 38 */\\n\"\n                + \"            /* 39 */\\n\"\n                + \"            /* 40 */              org.apache.flink.table.data.binary.BinaryStringData field$0;\\n\"\n                + \"            /* 41 */              boolean isNull$0;\\n\"\n                + \"            /* 42 */\\n\"\n                + \"            /* 43 */              isNull$0 = in1.isNullAt(2);\\n\"\n                + \"            /* 44 */              field$0 = org.apache.flink.table.data.binary.BinaryStringData\"\n                + \".EMPTY_UTF8;\\n\"\n                + \"            /* 45 */              if (!isNull$0) {\\n\"\n                + \"                /* 46 */                field$0 = ((org.apache.flink.table.data.binary\"\n                + \".BinaryStringData) in1.getString(2));\\n\"\n                + \"                /* 47 */              }\\n\"\n                + \"            /* 48 */\\n\"\n                + \"            /* 49 */              list.add(field$0);\\n\"\n                + \"            /* 50 */          }\\n\"\n                + \"        /* 51 */\\n\"\n                + \"        /* 52 */\\n\"\n                + \"        /* 53 */          resultConverterCollector$3.setCollector(c);\\n\"\n                + \"        /* 54 */\\n\"\n                + \"        /* 55 */\\n\"\n                + \"        /* 56 */          \"\n                +\n                \"function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.eval((List<org.apache.flink.table.data.binary.BinaryStringData>) list);\\n\"\n                + \"        /* 57 */\\n\"\n                + \"        /* 
58 */\\n\"\n                + \"        /* 59 */        }\\n\"\n                + \"    /* 60 */\\n\"\n                + \"    /* 61 */        @Override\\n\"\n                + \"    /* 62 */        public void close() throws Exception {\\n\"\n                + \"        /* 63 */\\n\"\n                + \"        /* 64 */          \"\n                +\n                \"function_flink$examples$sql$_03$source_sink$table$redis$v2$source$RedisRowDataLookupFunction$9a02959d27765bacc6e3b2107f2d01bc.close();\\n\"\n                + \"        /* 65 */\\n\"\n                + \"        /* 66 */        }\\n\"\n                + \"    /* 67 */\\n\"\n                + \"    /* 68 */\\n\"\n                + \"    /* 69 */              public class TableFunctionResultConverterCollector$2 extends org.apache\"\n                + \".flink.table.runtime.collector.WrappingCollector {\\n\"\n                + \"        /* 70 */\\n\"\n                + \"        /* 71 */\\n\"\n                + \"        /* 72 */\\n\"\n                + \"        /* 73 */                public TableFunctionResultConverterCollector$2() throws Exception \"\n                + \"{\\n\"\n                + \"            /* 74 */\\n\"\n                + \"            /* 75 */                }\\n\"\n                + \"        /* 76 */\\n\"\n                + \"        /* 77 */                @Override\\n\"\n                + \"        /* 78 */                public void open(org.apache.flink.configuration.Configuration \"\n                + \"parameters) throws Exception {\\n\"\n                + \"            /* 79 */\\n\"\n                + \"            /* 80 */                }\\n\"\n                + \"        /* 81 */\\n\"\n                + \"        /* 82 */                @Override\\n\"\n                + \"        /* 83 */                public void collect(Object record) throws Exception {\\n\"\n                + \"            /* 84 */                  List<org.apache.flink.table.data.RowData> externalResult$1 =\"\n                + \" (List<org.apache.flink.table.data.RowData>) record;\\n\"\n                + \"            /* 85 */\\n\"\n                + \"            /* 86 */\\n\"\n                + \"            /* 87 */\\n\"\n                + \"            /* 88 */\\n\"\n                + \"            /* 89 */                  if (externalResult$1 != null) {\\n\"\n                + \"                /* 90 */                    outputResult(externalResult$1);\\n\"\n                + \"                /* 91 */                  }\\n\"\n                + \"            /* 92 */\\n\"\n                + \"            /* 93 */                }\\n\"\n                + \"        /* 94 */\\n\"\n                + \"        /* 95 */                @Override\\n\"\n                + \"        /* 96 */                public void close() {\\n\"\n                + \"            /* 97 */                  try {\\n\"\n                + \"                /* 98 */\\n\"\n                + \"                /* 99 */                  } catch (Exception e) {\\n\"\n                + \"                /* 100 */                    throw new RuntimeException(e);\\n\"\n                + \"                /* 101 */                  }\\n\"\n                + \"            /* 102 */                }\\n\"\n                + \"        /* 103 */              }\\n\"\n                + \"    /* 104 */\\n\"\n                + \"    /* 105 */      }\\n\"\n                + \"/* 106 */    \";\n\n        Class<RichFlatMapFunction> c3 = 
JaninoUtils.genClass(\"LookupFunction$4\", s3, RichFlatMapFunction.class);\n\n        System.out.println(1);\n    }\n\n}\n"
  },
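  {
    "path": "flink-examples-1.13/src/test/java/flink/examples/sql/_07/query/_06_joins/JaninoSimpleCompileSketch.java",
    "content": "/*\n * Illustrative sketch only (not part of the original repo): compiles a tiny generated-code\n * string with Janino's SimpleCompiler, the same idea that JaninoCompileTest exercises through\n * the repo's JaninoUtils helper. Class names here are assumptions made for this example, and\n * the janino dependency is assumed to be available on the test classpath.\n */\npackage flink.examples.sql._07.query._06_joins;\n\nimport org.codehaus.janino.SimpleCompiler;\n\npublic class JaninoSimpleCompileSketch {\n\n    public static void main(String[] args) throws Exception {\n        // source code assembled at runtime, the same way the Flink SQL code generator emits classes\n        String source = \"public class GeneratedAdder { public int add(int a, int b) { return a + b; } }\";\n\n        // compile the string and load the generated class from the compiler's class loader\n        SimpleCompiler compiler = new SimpleCompiler();\n        compiler.cook(source);\n        Class<?> clazz = compiler.getClassLoader().loadClass(\"GeneratedAdder\");\n\n        Object adder = clazz.getDeclaredConstructor().newInstance();\n        Object result = clazz.getMethod(\"add\", int.class, int.class).invoke(adder, 1, 2);\n        System.out.println(result);\n    }\n\n}\n"
  },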
  {
    "path": "flink-examples-1.13/src/test/proto/person.proto",
    "content": "syntax = \"proto3\";\n\npackage flink;\n\noption java_package = \"flink.examples.sql._05.format.formats.protobuf\";\noption java_outer_classname = \"PersonOuterClassname\";\noption java_multiple_files = true;\n\nmessage Person {\n    string name = 1;\n    repeated string names = 2;\n\n    int32 id = 3;\n    repeated int32 ids = 4;\n\n    int64 long = 5;\n    repeated int64 longs = 6;\n\n    map<string, int32> si_map = 7;\n    map<string, int64> sl_map = 8;\n    map<string, Dog> sd_map = 9;\n\n    Dog dog = 10;\n    repeated Dog dogs = 11;\n\n    enum ContactType {\n        MOBILE = 0;\n        MESSAGE = 1;\n        WECHAT = 2;\n        EMAIL = 3;\n    }\n\n    message Contact {\n        string number = 1;\n        ContactType contact_type = 2;\n    }\n\n    repeated Contact contacts = 12;\n}\n\nmessage Dog {\n    string name = 1;\n    int32 id = 2;\n}"
  },
  {
    "path": "flink-examples-1.13/src/test/scala/ScalaEnv.scala",
    "content": "import org.apache.flink.api.java.tuple.Tuple3\nimport org.apache.flink.api.scala._\nimport org.apache.flink.streaming.api.scala.StreamExecutionEnvironment\nimport org.apache.flink.table.api.bridge.scala.StreamTableEnvironment\nimport org.apache.flink.table.api.{DataTypes, Schema}\nimport org.apache.flink.types.Row\n\n// https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/udfs.html\n\n/**\n * https://blog.csdn.net/fct2001140269/article/details/84066274\n *\n * https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/\n *\n * https://blog.csdn.net/qq_35338741/article/details/108645832\n */\n\nobject ScalaEnv {\n\n  def main(args: Array[String]): Unit = {\n    val env = StreamExecutionEnvironment.getExecutionEnvironment\n\n    // create a TableEnvironment\n    val tableEnv = StreamTableEnvironment.create(env)\n\n    val source = env.fromCollection(scala.Iterator.apply(Tuple3.of(new String(\"2\"), 1L, 1627218000000L), Tuple3.of(new String(\"2\"), 101L, 1627218000000L + 6000L), Tuple3.of(new String(\"2\"), 201L, 1627218000000L + 7000L), Tuple3.of(new String(\"2\"), 301L, 1627218000000L + 7000L)))\n\n    tableEnv.createTemporaryView(\"source_db.source_table\"\n      , source\n      , Schema\n        .newBuilder()\n        .column(\"f0\", DataTypes.STRING())\n        .column(\"f1\", DataTypes.BIGINT())\n        .column(\"f2\", DataTypes.BIGINT())\n        .build())\n\n    tableEnv.createFunction(\"hashCode\"\n      , classOf[TableFunc0])\n\n    val sql = \"select * from source_db.source_table as a LEFT JOIN LATERAL TABLE(table1(a.f1)) AS DIM(status_new) ON TRUE\"\n\n    tableEnv.toDataStream(tableEnv.sqlQuery(sql), classOf[Row]).print()\n\n    // execute\n    env.execute()\n  }\n\n}"
  },
  {
    "path": "flink-examples-1.13/src/test/scala/TableFunc0.scala",
    "content": "import org.apache.flink.table.functions.TableFunction\n\n\ncase class SimpleUser(name: String, age: Int)\n\nclass TableFunc0 extends TableFunction[SimpleUser] {\n\n  // make sure input element's format is \"<string&gt#<int>\"\n\n  def eval(user: Long): Unit = {\n\n//    if (user.contains(\"#\")) {\n//\n//      val splits = user.split(\"#\")\n//\n//      collect(SimpleUser(splits(0), splits(1).toInt))\n//\n//    }\n  }\n\n}"
  },
  {
    "path": "flink-examples-1.14/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <parent>\n        <artifactId>flink-study</artifactId>\n        <groupId>com.github.antigeneral</groupId>\n        <version>1.0-SNAPSHOT</version>\n    </parent>\n    <modelVersion>4.0.0</modelVersion>\n\n    <groupId>com.github.antigeneral</groupId>\n    <artifactId>flink-examples-1.14</artifactId>\n\n    <properties>\n        <flink.version>1.14.0</flink.version>\n    </properties>\n\n<!--    <build>-->\n\n<!--        <extensions>-->\n<!--            <extension>-->\n<!--                <groupId>kr.motd.maven</groupId>-->\n<!--                <artifactId>os-maven-plugin</artifactId>-->\n<!--                <version>${os-maven-plugin.version}</version>-->\n<!--            </extension>-->\n<!--        </extensions>-->\n\n<!--        <plugins>-->\n\n\n<!--            <plugin>-->\n<!--                <groupId>org.apache.maven.plugins</groupId>-->\n<!--                <artifactId>maven-compiler-plugin</artifactId>-->\n<!--            </plugin>-->\n\n<!--            <plugin>-->\n<!--                <groupId>org.xolstice.maven.plugins</groupId>-->\n<!--                <artifactId>protobuf-maven-plugin</artifactId>-->\n<!--            </plugin>-->\n\n<!--            &lt;!&ndash;            <plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;                &lt;!&ndash; Extract parser grammar template from calcite-core.jar and put&ndash;&gt;-->\n<!--            &lt;!&ndash;                     it under ${project.build.directory} where all freemarker templates are. &ndash;&gt;&ndash;&gt;-->\n<!--            &lt;!&ndash;                <groupId>org.apache.maven.plugins</groupId>&ndash;&gt;-->\n<!--            &lt;!&ndash;                <artifactId>maven-dependency-plugin</artifactId>&ndash;&gt;-->\n<!--            &lt;!&ndash;                <executions>&ndash;&gt;-->\n<!--            &lt;!&ndash;                    <execution>&ndash;&gt;-->\n<!--            &lt;!&ndash;                        <id>unpack-parser-template</id>&ndash;&gt;-->\n<!--            &lt;!&ndash;                        <phase>initialize</phase>&ndash;&gt;-->\n<!--            &lt;!&ndash;                        <goals>&ndash;&gt;-->\n<!--            &lt;!&ndash;                            <goal>unpack</goal>&ndash;&gt;-->\n<!--            &lt;!&ndash;                        </goals>&ndash;&gt;-->\n<!--            &lt;!&ndash;                        <configuration>&ndash;&gt;-->\n<!--            &lt;!&ndash;                            <artifactItems>&ndash;&gt;-->\n<!--            &lt;!&ndash;                                <artifactItem>&ndash;&gt;-->\n<!--            &lt;!&ndash;                                    <groupId>org.apache.calcite</groupId>&ndash;&gt;-->\n<!--            &lt;!&ndash;                                    <artifactId>calcite-core</artifactId>&ndash;&gt;-->\n<!--            &lt;!&ndash;                                    <type>jar</type>&ndash;&gt;-->\n<!--            &lt;!&ndash;                                    <overWrite>true</overWrite>&ndash;&gt;-->\n<!--            &lt;!&ndash;                                    <outputDirectory>${project.build.directory}/</outputDirectory>&ndash;&gt;-->\n<!--            &lt;!&ndash;                                    <includes>**/Parser.jj</includes>&ndash;&gt;-->\n<!--   
         &lt;!&ndash;                                </artifactItem>&ndash;&gt;-->\n<!--            &lt;!&ndash;                            </artifactItems>&ndash;&gt;-->\n<!--            &lt;!&ndash;                        </configuration>&ndash;&gt;-->\n<!--            &lt;!&ndash;                    </execution>&ndash;&gt;-->\n<!--            &lt;!&ndash;                </executions>&ndash;&gt;-->\n<!--            &lt;!&ndash;            </plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;            &lt;!&ndash; adding fmpp code gen &ndash;&gt;&ndash;&gt;-->\n<!--            &lt;!&ndash;            <plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;                <artifactId>maven-resources-plugin</artifactId>&ndash;&gt;-->\n<!--            &lt;!&ndash;            </plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;            <plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;                <groupId>com.googlecode.fmpp-maven-plugin</groupId>&ndash;&gt;-->\n<!--            &lt;!&ndash;                <artifactId>fmpp-maven-plugin</artifactId>&ndash;&gt;-->\n<!--            &lt;!&ndash;            </plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;            <plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;                &lt;!&ndash; This must be run AFTER the fmpp-maven-plugin &ndash;&gt;&ndash;&gt;-->\n<!--            &lt;!&ndash;                <groupId>org.codehaus.mojo</groupId>&ndash;&gt;-->\n<!--            &lt;!&ndash;                <artifactId>javacc-maven-plugin</artifactId>&ndash;&gt;-->\n<!--            &lt;!&ndash;            </plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;            <plugin>&ndash;&gt;-->\n<!--            &lt;!&ndash;                <groupId>org.apache.maven.plugins</groupId>&ndash;&gt;-->\n<!--            &lt;!&ndash;                <artifactId>maven-surefire-plugin</artifactId>&ndash;&gt;-->\n<!--            &lt;!&ndash;            </plugin>&ndash;&gt;-->\n<!--        </plugins>-->\n<!--    </build>-->\n\n\n<!--    <dependencies>-->\n\n\n<!--        <dependency>-->\n<!--            <groupId>com.google.protobuf</groupId>-->\n<!--            <artifactId>protobuf-java</artifactId>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-connector-hive_2.11</artifactId>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.hadoop</groupId>-->\n<!--            <artifactId>hadoop-common</artifactId>-->\n<!--            <version>3.1.0</version>-->\n<!--            <scope>compile</scope>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>slf4j-log4j12</artifactId>-->\n<!--                    <groupId>org.slf4j</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>commons-logging</artifactId>-->\n<!--                    <groupId>commmons-logging</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>servlet-api</artifactId>-->\n<!--                    <groupId>javax.servlet</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--            <optional>true</optional>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.hive</groupId>-->\n<!--            <artifactId>hive-exec</artifactId>-->\n<!--            <exclusions>-->\n<!--    
            <exclusion>-->\n<!--                    <artifactId>log4j-slf4j-impl</artifactId>-->\n<!--                    <groupId>org.apache.logging.log4j</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>guava</artifactId>-->\n<!--                    <groupId>com.google.guava</groupId>-->\n<!--                </exclusion>-->\n<!--                &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--                &lt;!&ndash;                    <artifactId>hadoop-common</artifactId>&ndash;&gt;-->\n<!--                &lt;!&ndash;                    <groupId>org.apache.hadoop</groupId>&ndash;&gt;-->\n<!--                &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n<!--        &lt;!&ndash;        <dependency>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <groupId>org.apache.hadoop</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <artifactId>hadoop-common</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <version>${hadoop.version}</version>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <exclusions>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>slf4j-log4j12</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>org.slf4j</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>jsr311-api</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>javax.ws.rs</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>jersey-core</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>com.sun.jersey</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>jersey-server</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>com.sun.jersey</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>jersey-servlet</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>com.sun.jersey</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>jersey-json</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>com.sun.jersey</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;            </exclusions>&ndash;&gt;-->\n<!--        &lt;!&ndash;        </dependency>&ndash;&gt;-->\n<!--        &lt;!&ndash;        <dependency>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <groupId>org.apache.hadoop</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <artifactId>hadoop-client</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;       
     <version>${hadoop.version}</version>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <exclusions>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>guava</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>com.google.guava</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>hadoop-common</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>org.apache.hadoop</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;            </exclusions>&ndash;&gt;-->\n<!--        &lt;!&ndash;        </dependency>&ndash;&gt;-->\n<!--        &lt;!&ndash;        <dependency>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <groupId>org.apache.hadoop</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <artifactId>hadoop-hdfs</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <version>${hadoop.version}</version>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <exclusions>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>jsr311-api</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>javax.ws.rs</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>jersey-core</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>com.sun.jersey</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>jersey-server</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>com.sun.jersey</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                <exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <artifactId>guava</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                    <groupId>com.google.guava</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;                </exclusion>&ndash;&gt;-->\n<!--        &lt;!&ndash;            </exclusions>&ndash;&gt;-->\n<!--        &lt;!&ndash;        </dependency>&ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.hadoop</groupId>-->\n<!--            <artifactId>hadoop-mapreduce-client-core</artifactId>-->\n<!--            <version>3.1.0</version>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>slf4j-log4j12</artifactId>-->\n<!--                    <groupId>org.slf4j</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-client</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-server</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--            
    <exclusion>-->\n<!--                    <artifactId>jersey-servlet</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-core</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>jersey-json</artifactId>-->\n<!--                    <groupId>com.sun.jersey</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>guava</artifactId>-->\n<!--                    <groupId>com.google.guava</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/net.mguenther.kafka/kafka-junit &ndash;&gt;-->\n<!--        &lt;!&ndash;        <dependency>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <groupId>net.mguenther.kafka</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <artifactId>kafka-junit</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;        </dependency>&ndash;&gt;-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.scala-lang/scala-library &ndash;&gt;-->\n<!--        &lt;!&ndash;        <dependency>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <groupId>org.scala-lang</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <artifactId>scala-library</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;        </dependency>&ndash;&gt;-->\n\n<!--        <dependency>-->\n<!--            <groupId>com.twitter</groupId>-->\n<!--            <artifactId>chill-protobuf</artifactId>-->\n<!--            &lt;!&ndash; exclusions for dependency conversion &ndash;&gt;-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <groupId>com.esotericsoftware.kryo</groupId>-->\n<!--                    <artifactId>kryo</artifactId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash;        <dependency>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <groupId>org.apache.kafka</groupId>&ndash;&gt;-->\n<!--        &lt;!&ndash;            <artifactId>kafka_2.13</artifactId>&ndash;&gt;-->\n<!--        &lt;!&ndash;        </dependency>&ndash;&gt;-->\n\n\n<!--        <dependency>-->\n<!--            <groupId>junit</groupId>-->\n<!--            <artifactId>junit</artifactId>-->\n<!--            <scope>test</scope>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>net.java.dev.javacc</groupId>-->\n<!--            <artifactId>javacc</artifactId>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.httpcomponents</groupId>-->\n<!--            <artifactId>httpclient</artifactId>-->\n<!--            <version>4.5.10</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-statebackend-rocksdb_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>joda-time</groupId>-->\n<!--            <artifactId>joda-time</artifactId>-->\n<!--            &lt;!&ndash; managed version 
&ndash;&gt;-->\n<!--            <scope>provided</scope>-->\n<!--            &lt;!&ndash; Avro records can contain JodaTime fields when using logical fields.-->\n<!--                In order to handle them, we need to add an optional dependency.-->\n<!--                Users with those Avro records need to add this dependency themselves. &ndash;&gt;-->\n<!--            <optional>true</optional>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.github.rholder/guava-retrying &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.github.rholder</groupId>-->\n<!--            <artifactId>guava-retrying</artifactId>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>guava</artifactId>-->\n<!--                    <groupId>com.google.guava</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.projectlombok</groupId>-->\n<!--            <artifactId>lombok</artifactId>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-java</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-streaming-java_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>flink-shaded-zookeeper-3</artifactId>-->\n<!--                    <groupId>org.apache.flink</groupId>-->\n<!--                </exclusion>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>flink-shaded-guava</artifactId>-->\n<!--                    <groupId>org.apache.flink</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-clients_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.mvel/mvel2 &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.mvel</groupId>-->\n<!--            <artifactId>mvel2</artifactId>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/redis.clients/jedis &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>redis.clients</groupId>-->\n<!--            <artifactId>jedis</artifactId>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; 对zookeeper的底层api的一些封装 &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.curator</groupId>-->\n<!--            <artifactId>curator-framework</artifactId>-->\n<!--        </dependency>-->\n<!--        &lt;!&ndash; 封装了一些高级特性，如：Cache事件监听、选举、分布式锁、分布式Barrier &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.curator</groupId>-->\n<!--            <artifactId>curator-recipes</artifactId>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.kafka</groupId>-->\n<!--            <artifactId>kafka-clients</artifactId>-->\n<!--        </dependency>-->\n\n<!--      
  <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy</artifactId>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-ant</artifactId>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-cli-commons</artifactId>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-cli-picocli</artifactId>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-console</artifactId>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-datetime</artifactId>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-docgenerator</artifactId>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-groovydoc</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-groovysh</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-jmx</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-json</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-jsr223</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-macro</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-nio</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-servlet</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-sql</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-swing</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-templates</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-test</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-test-junit5</artifactId>-->\n\n<!--        
</dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-testng</artifactId>-->\n\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.codehaus.groovy</groupId>-->\n<!--            <artifactId>groovy-xml</artifactId>-->\n\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-planner_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>com.google.code.gson</groupId>-->\n<!--            <artifactId>gson</artifactId>-->\n\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-common</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-api-java</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-api-java-bridge_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-table-planner_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <scope>compile</scope>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.flink/flink-connector-jdbc &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-connector-jdbc_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-connector-hbase-2.2_2.11</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>hbase-shaded-miscellaneous</artifactId>-->\n<!--                    <groupId>org.apache.hbase.thirdparty</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-json</artifactId>-->\n<!--            <version>${flink.version}</version>-->\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.bahir/flink-connector-redis &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.bahir</groupId>-->\n<!--            <artifactId>flink-connector-redis_2.10</artifactId>-->\n<!--            <version>1.0</version>-->\n<!--        </dependency>-->\n\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka &ndash;&gt;-->\n<!--        
<dependency>-->\n<!--            <groupId>org.apache.flink</groupId>-->\n<!--            <artifactId>flink-connector-kafka_2.12</artifactId>-->\n\n<!--        </dependency>-->\n\n\n<!--        <dependency>-->\n<!--            <groupId>ch.qos.logback</groupId>-->\n<!--            <artifactId>logback-classic</artifactId>-->\n<!--            <scope>compile</scope>-->\n\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.slf4j</groupId>-->\n<!--            <artifactId>slf4j-log4j12</artifactId>-->\n\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n\n<!--            <groupId>org.apache.flink</groupId>-->\n\n<!--            <artifactId>flink-runtime-web_2.11</artifactId>-->\n\n<!--            <version>${flink.version}</version>-->\n\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.core</groupId>-->\n<!--            <artifactId>jackson-databind</artifactId>-->\n\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.core</groupId>-->\n<!--            <artifactId>jackson-core</artifactId>-->\n\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.core</groupId>-->\n<!--            <artifactId>jackson-annotations</artifactId>-->\n\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-kotlin &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.module</groupId>-->\n<!--            <artifactId>jackson-module-kotlin</artifactId>-->\n\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-parameter-names &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.module</groupId>-->\n<!--            <artifactId>jackson-module-parameter-names</artifactId>-->\n\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.fasterxml.jackson.datatype/jackson-datatype-guava &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.fasterxml.jackson.datatype</groupId>-->\n<!--            <artifactId>jackson-datatype-guava</artifactId>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>guava</artifactId>-->\n<!--                    <groupId>com.google.guava</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n\n<!--        </dependency>-->\n\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.hubspot.jackson/jackson-datatype-protobuf &ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>com.hubspot.jackson</groupId>-->\n<!--            <artifactId>jackson-datatype-protobuf</artifactId>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>guava</artifactId>-->\n<!--                    <groupId>com.google.guava</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n\n<!--        </dependency>-->\n\n<!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.calcite/calcite-core 
&ndash;&gt;-->\n<!--        <dependency>-->\n<!--            <groupId>org.apache.calcite</groupId>-->\n<!--            <artifactId>calcite-core</artifactId>-->\n<!--            <exclusions>-->\n<!--                <exclusion>-->\n<!--                    <artifactId>guava</artifactId>-->\n<!--                    <groupId>com.google.guava</groupId>-->\n<!--                </exclusion>-->\n<!--            </exclusions>-->\n\n<!--        </dependency>-->\n\n<!--        <dependency>-->\n<!--            <groupId>com.google.guava</groupId>-->\n<!--            <artifactId>guava</artifactId>-->\n<!--        </dependency>-->\n\n\n<!--    </dependencies>-->\n\n\n</project>"
  },
  {
    "path": "flink-examples-1.14/src/main/java/flink/examples/sql/_08/batch/HiveModuleV2.java",
    "content": "package flink.examples.sql._08.batch;\n\nimport static org.apache.flink.util.Preconditions.checkArgument;\n\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.Map;\nimport java.util.Optional;\nimport java.util.Set;\n\nimport org.apache.flink.annotation.VisibleForTesting;\nimport org.apache.flink.table.catalog.hive.client.HiveShim;\nimport org.apache.flink.table.catalog.hive.client.HiveShimLoader;\nimport org.apache.flink.table.catalog.hive.factories.HiveFunctionDefinitionFactory;\nimport org.apache.flink.table.functions.FunctionDefinition;\nimport org.apache.flink.table.module.Module;\nimport org.apache.flink.table.module.hive.udf.generic.GenericUDFLegacyGroupingID;\nimport org.apache.flink.table.module.hive.udf.generic.HiveGenericUDFGrouping;\nimport org.apache.flink.util.StringUtils;\nimport org.apache.hadoop.hive.ql.exec.FunctionInfo;\n\npublic class HiveModuleV2 implements Module {\n\n\n    // a set of functions that shouldn't be overridden by HiveModule\n    @VisibleForTesting\n    static final Set<String> BUILT_IN_FUNC_BLACKLIST =\n            Collections.unmodifiableSet(\n                    new HashSet<>(\n                            Arrays.asList(\n                                    \"count\",\n                                    \"cume_dist\",\n                                    \"current_date\",\n                                    \"current_timestamp\",\n                                    \"dense_rank\",\n                                    \"first_value\",\n                                    \"lag\",\n                                    \"last_value\",\n                                    \"lead\",\n                                    \"ntile\",\n                                    \"rank\",\n                                    \"row_number\",\n                                    \"hop\",\n                                    \"hop_end\",\n                                    \"hop_proctime\",\n                                    \"hop_rowtime\",\n                                    \"hop_start\",\n                                    \"percent_rank\",\n                                    \"session\",\n                                    \"session_end\",\n                                    \"session_proctime\",\n                                    \"session_rowtime\",\n                                    \"session_start\",\n                                    \"tumble\",\n                                    \"tumble_end\",\n                                    \"tumble_proctime\",\n                                    \"tumble_rowtime\",\n                                    \"tumble_start\")));\n\n    private final HiveFunctionDefinitionFactory factory;\n    private final String hiveVersion;\n    private final HiveShim hiveShim;\n    private Set<String> functionNames;\n\n    public HiveModuleV2() {\n        this(HiveShimLoader.getHiveVersion());\n    }\n\n    public HiveModuleV2(String hiveVersion) {\n        checkArgument(\n                !StringUtils.isNullOrWhitespaceOnly(hiveVersion), \"hiveVersion cannot be null\");\n\n        this.hiveVersion = hiveVersion;\n        this.hiveShim = HiveShimLoader.loadHiveShim(hiveVersion);\n        this.factory = new HiveFunctionDefinitionFactory(hiveShim);\n        this.functionNames = new HashSet<>();\n        this.map = new HashMap<>();\n    }\n\n    @Override\n    public Set<String> listFunctions() {\n        // lazy initialize\n        if 
(functionNames.isEmpty()) {\n            functionNames = hiveShim.listBuiltInFunctions();\n            functionNames.removeAll(BUILT_IN_FUNC_BLACKLIST);\n            functionNames.add(\"grouping\");\n            functionNames.add(GenericUDFLegacyGroupingID.NAME);\n            functionNames.addAll(map.keySet());\n        }\n        return functionNames;\n    }\n\n    @Override\n    public Optional<FunctionDefinition> getFunctionDefinition(String name) {\n        if (BUILT_IN_FUNC_BLACKLIST.contains(name)) {\n            return Optional.empty();\n        }\n        // We override Hive's grouping function. Refer to the implementation for more details.\n        if (name.equalsIgnoreCase(\"grouping\")) {\n            return Optional.of(\n                    factory.createFunctionDefinitionFromHiveFunction(\n                            name, HiveGenericUDFGrouping.class.getName()));\n        }\n\n        // this function is used to generate legacy GROUPING__ID value for old hive versions\n        if (name.equalsIgnoreCase(GenericUDFLegacyGroupingID.NAME)) {\n            return Optional.of(\n                    factory.createFunctionDefinitionFromHiveFunction(\n                            name, GenericUDFLegacyGroupingID.class.getName()));\n        }\n\n        Optional<FunctionInfo> info = hiveShim.getBuiltInFunctionInfo(name);\n\n        if (info.isPresent()) {\n            return info.map(\n                    functionInfo ->\n                            factory.createFunctionDefinitionFromHiveFunction(\n                                    name, functionInfo.getFunctionClass().getName()));\n        } else {\n            return Optional.ofNullable(this.map.get(name))\n                    .map(hiveUDFClassName -> factory.createFunctionDefinitionFromHiveFunction(name, hiveUDFClassName));\n        }\n    }\n\n    public String getHiveVersion() {\n        return hiveVersion;\n    }\n\n    private final Map<String, String> map;\n\n    public void registryHiveUDF(String hiveUDFName, String hiveUDFClassName) {\n        this.map.put(hiveUDFName, hiveUDFClassName);\n    }\n}\n"
  },
  {
    "path": "flink-examples-1.14/src/main/java/flink/examples/sql/_08/batch/Test.java",
    "content": "package flink.examples.sql._08.batch;\n\nimport java.util.concurrent.TimeUnit;\n\nimport org.apache.flink.api.common.RuntimeExecutionMode;\nimport org.apache.flink.api.common.restartstrategy.RestartStrategies;\nimport org.apache.flink.api.common.typeinfo.TypeInformation;\nimport org.apache.flink.api.java.typeutils.ResultTypeQueryable;\nimport org.apache.flink.api.java.typeutils.RowTypeInfo;\nimport org.apache.flink.api.java.utils.ParameterTool;\nimport org.apache.flink.configuration.Configuration;\nimport org.apache.flink.streaming.api.CheckpointingMode;\nimport org.apache.flink.streaming.api.datastream.DataStream;\nimport org.apache.flink.streaming.api.environment.CheckpointConfig;\nimport org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;\nimport org.apache.flink.streaming.api.functions.source.SourceFunction;\nimport org.apache.flink.table.api.SqlDialect;\nimport org.apache.flink.table.api.Table;\nimport org.apache.flink.table.api.bridge.java.StreamTableEnvironment;\nimport org.apache.flink.table.catalog.hive.HiveCatalog;\nimport org.apache.flink.table.module.CoreModule;\nimport org.apache.flink.types.Row;\n\n\n/**\n * hadoop 启动：/usr/local/Cellar/hadoop/3.2.1/sbin/start-all.sh\n * http://localhost:9870/\n * http://localhost:8088/cluster\n * <p>\n * hive 启动：$HIVE_HOME/bin/hive --service metastore &\n * hive cli：$HIVE_HOME/bin/hive\n */\npublic class Test {\n\n    public static void main(String[] args) {\n        StreamExecutionEnvironment env =\n                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(new Configuration());\n\n        ParameterTool parameterTool = ParameterTool.fromArgs(args);\n\n        env.setRestartStrategy(RestartStrategies.failureRateRestart(6, org.apache.flink.api.common.time.Time\n                .of(10L, TimeUnit.MINUTES), org.apache.flink.api.common.time.Time.of(5L, TimeUnit.SECONDS)));\n        env.getConfig().setGlobalJobParameters(parameterTool);\n        env.setParallelism(1);\n\n        // ck 设置\n        env.getCheckpointConfig().setFailOnCheckpointingErrors(false);\n        env.enableCheckpointing(30 * 1000L, CheckpointingMode.EXACTLY_ONCE);\n        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(3L);\n        env.getCheckpointConfig()\n                .enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);\n\n        env.setRuntimeMode(RuntimeExecutionMode.BATCH);\n        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);\n\n\n        tEnv.getConfig().getConfiguration().setString(\"pipeline.name\", \"1.14.0 Interval Outer Join 事件时间案例\");\n\n\n        String defaultDatabase = \"default\";\n        String hiveConfDir = \"/usr/local/Cellar/hive/3.1.2/libexec/conf\";\n\n        HiveCatalog hive = new HiveCatalog(\"default\", defaultDatabase, hiveConfDir);\n        tEnv.registerCatalog(\"default\", hive);\n\n        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);\n\n        // set the HiveCatalog as the current catalog of the session\n        tEnv.useCatalog(\"default\");\n\n        String version = \"3.1.2\";\n        tEnv.unloadModule(\"core\");\n\n        HiveModuleV2 hiveModuleV2 = new HiveModuleV2(version);\n\n        tEnv.loadModule(\"default\", hiveModuleV2);\n        tEnv.loadModule(\"core\", CoreModule.INSTANCE);\n\n        String sql3 = \"\"\n                + \"with tmp as (\"\n                + \"\"\n                + \"select count(1) as part_pv\\n\"\n                + \"         , max(order_amount) as part_max\\n\"\n     
           + \"         , min(order_amount) as part_min\\n\"\n                + \"    from hive_table\\n\"\n                + \"    where p_date between '20210920' and '20210920'\\n\"\n                + \")\\n\"\n                + \"select * from tmp\";\n\n        Table t = tEnv.sqlQuery(sql3);\n\n        DataStream<Row> r = env.addSource(new UserDefinedSource());\n\n        tEnv.createTemporaryView(\"test\", r);\n\n        tEnv.executeSql(\"select * from test\")\n                .print();\n    }\n\n    private static class UserDefinedSource implements SourceFunction<Row>, ResultTypeQueryable<Row> {\n\n        private volatile boolean isCancel;\n\n        @Override\n        public void run(SourceContext<Row> sourceContext) throws Exception {\n\n            int i = 0;\n\n            while (!this.isCancel) {\n\n                sourceContext.collect(Row.of(\"a\" + i, \"b\", 1L));\n\n                Thread.sleep(10L);\n                i++;\n\n                if (i == 100) {\n                    this.isCancel = true;\n                }\n            }\n\n        }\n\n        @Override\n        public void cancel() {\n            this.isCancel = true;\n        }\n\n        @Override\n        public TypeInformation<Row> getProducedType() {\n            return new RowTypeInfo(new TypeInformation[] {\n                    TypeInformation.of(String.class)\n                    , TypeInformation.of(String.class)\n                    , TypeInformation.of(Long.class)\n            }, new String[] {\"a\", \"b\", \"c\"});\n        }\n    }\n\n}\n"
  },
  {
    "path": "flink-examples-1.8/.gitignore",
    "content": "HELP.md\ntarget/\n!.mvn/wrapper/maven-wrapper.jar\n!**/src/main/**\n#**/src/test/**\n.idea/\n*.iml\n*.DS_Store\n\n### IntelliJ IDEA ###\n.idea\n*.iws\n*.ipr\n\n"
  },
  {
    "path": "flink-examples-1.8/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <parent>\n        <artifactId>flink-study</artifactId>\n        <groupId>com.github.antigeneral</groupId>\n        <version>1.0-SNAPSHOT</version>\n    </parent>\n    <modelVersion>4.0.0</modelVersion>\n\n    <groupId>com.github.antigeneral</groupId>\n    <artifactId>flink-examples-1.8</artifactId>\n\n    <build>\n\n        <extensions>\n            <extension>\n                <groupId>kr.motd.maven</groupId>\n                <artifactId>os-maven-plugin</artifactId>\n                <version>${os-maven-plugin.version}</version>\n            </extension>\n        </extensions>\n\n        <plugins>\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-compiler-plugin</artifactId>\n                <configuration>\n                    <source>8</source>\n                    <target>8</target>\n                </configuration>\n            </plugin>\n\n            <plugin>\n                <groupId>org.xolstice.maven.plugins</groupId>\n                <artifactId>protobuf-maven-plugin</artifactId>\n                <version>${protobuf-maven-plugin.version}</version>\n                <configuration>\n                    <protoSourceRoot>\n                        src/test/proto\n                    </protoSourceRoot>\n                    <protocArtifact>\n                        com.google.protobuf:protoc:3.1.0:exe:${os.detected.classifier}\n                    </protocArtifact>\n                    <pluginId>grpc-java</pluginId>\n                    <pluginArtifact>\n                        io.grpc:protoc-gen-grpc-java:${grpc-plugin.version}:exe:${os.detected.classifier}\n                    </pluginArtifact>\n                </configuration>\n                <executions>\n                    <execution>\n                        <goals>\n                            <goal>compile</goal>\n                            <goal>compile-custom</goal>\n                        </goals>\n                    </execution>\n                </executions>\n            </plugin>\n\n            <plugin>\n                <groupId>net.alchim31.maven</groupId>\n                <artifactId>scala-maven-plugin</artifactId>\n                <version>3.3.1</version>\n                <executions>\n                    <!-- Run scala compiler in the process-resources phase, so that dependencies on\n                        scala classes can be resolved later in the (Java) compile phase -->\n                    <execution>\n                        <id>scala-compile-first</id>\n                        <phase>process-resources</phase>\n                        <goals>\n                            <goal>compile</goal>\n                        </goals>\n                    </execution>\n\n                    <!-- Run scala compiler in the process-test-resources phase, so that dependencies on\n                         scala classes can be resolved later in the (Java) test-compile phase -->\n                    <execution>\n                        <id>scala-test-compile</id>\n                        <phase>process-test-resources</phase>\n                        <goals>\n                            <goal>testCompile</goal>\n                        </goals>\n                    
</execution>\n                </executions>\n                <configuration>\n                    <jvmArgs>\n                        <jvmArg>-Xms128m</jvmArg>\n                        <jvmArg>-Xmx512m</jvmArg>\n                    </jvmArgs>\n                </configuration>\n            </plugin>\n\n            <!-- Adding scala source directories to build path -->\n            <plugin>\n                <groupId>org.codehaus.mojo</groupId>\n                <artifactId>build-helper-maven-plugin</artifactId>\n                <executions>\n                    <!-- Add src/main/scala to eclipse build path -->\n                    <execution>\n                        <id>add-source</id>\n                        <phase>generate-sources</phase>\n                        <goals>\n                            <goal>add-source</goal>\n                        </goals>\n                        <configuration>\n                            <sources>\n                                <source>src/main/scala</source>\n                            </sources>\n                        </configuration>\n                    </execution>\n                    <!-- Add src/test/scala to eclipse build path -->\n                    <execution>\n                        <id>add-test-source</id>\n                        <phase>generate-test-sources</phase>\n                        <goals>\n                            <goal>add-test-source</goal>\n                        </goals>\n                        <configuration>\n                            <sources>\n                                <source>src/test/java</source>\n                                <source>src/test/scala</source>\n                            </sources>\n                        </configuration>\n                    </execution>\n                </executions>\n            </plugin>\n\n            <!-- Eclipse Integration -->\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-eclipse-plugin</artifactId>\n                <version>2.8</version>\n                <configuration>\n                    <downloadSources>true</downloadSources>\n                    <projectnatures>\n                        <projectnature>org.scala-ide.sdt.core.scalanature</projectnature>\n                        <projectnature>org.eclipse.jdt.core.javanature</projectnature>\n                    </projectnatures>\n                    <buildcommands>\n                        <buildcommand>org.scala-ide.sdt.core.scalabuilder</buildcommand>\n                    </buildcommands>\n                    <classpathContainers>\n                        <classpathContainer>org.scala-ide.sdt.launching.SCALA_CONTAINER</classpathContainer>\n                        <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>\n                    </classpathContainers>\n                    <excludes>\n                        <exclude>org.scala-lang:scala-library</exclude>\n                        <exclude>org.scala-lang:scala-compiler</exclude>\n                    </excludes>\n                    <sourceIncludes>\n                        <sourceInclude>**/*.scala</sourceInclude>\n                        <sourceInclude>**/*.java</sourceInclude>\n                    </sourceIncludes>\n                </configuration>\n            </plugin>\n\n        </plugins>\n    </build>\n\n    <properties>\n        <scala.macros.version>2.1.1</scala.macros.version>\n        <scala.version>2.11.12</scala.version>\n        
<flink.version>1.8.0</flink.version>\n        <lombok.version>1.18.20</lombok.version>\n        <scala.binary.version>2.11</scala.binary.version>\n        <mvel2.version>2.4.12.Final</mvel2.version>\n        <curator.version>2.12.0</curator.version>\n        <kafka.version>2.1.1</kafka.version>\n        <groovy.version>2.5.7</groovy.version>\n        <gson.version>2.2.4</gson.version>\n        <guava.version>30.1.1-jre</guava.version>\n        <guava.retrying.version>2.0.0</guava.retrying.version>\n        <logback-classic.version>1.2.3</logback-classic.version>\n        <slf4j-log4j12.version>1.8.0-beta2</slf4j-log4j12.version>\n\n        <grpc-plugin.version>1.23.1</grpc-plugin.version>\n        <protobuf-maven-plugin.version>0.6.1</protobuf-maven-plugin.version>\n        <protobuf-java.version>3.11.0</protobuf-java.version>\n\n        <joda-time.version>2.5</joda-time.version>\n\n        <os-maven-plugin.version>1.6.2</os-maven-plugin.version>\n\n        <maven.compiler.source>1.8</maven.compiler.source>\n        <maven.compiler.target>1.8</maven.compiler.target>\n    </properties>\n\n    <dependencies>\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.httpcomponents</groupId>-->\n        <!--            <artifactId>httpclient</artifactId>-->\n        <!--            <version>4.5.10</version>-->\n        <!--            <scope>compile</scope>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>joda-time</groupId>-->\n        <!--            <artifactId>joda-time</artifactId>-->\n        <!--            &lt;!&ndash; managed version &ndash;&gt;-->\n        <!--            <scope>provided</scope>-->\n        <!--            &lt;!&ndash; Avro records can contain JodaTime fields when using logical fields.-->\n        <!--                In order to handle them, we need to add an optional dependency.-->\n        <!--                Users with those Avro records need to add this dependency themselves. 
&ndash;&gt;-->\n        <!--            <optional>true</optional>-->\n        <!--            <version>${joda-time.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>com.google.protobuf</groupId>-->\n        <!--            <artifactId>protobuf-java</artifactId>-->\n        <!--            <version>${protobuf-java.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        &lt;!&ndash; https://mvnrepository.com/artifact/com.github.rholder/guava-retrying &ndash;&gt;-->\n        <!--        <dependency>-->\n        <!--            <groupId>com.github.rholder</groupId>-->\n        <!--            <artifactId>guava-retrying</artifactId>-->\n        <!--            <version>${guava.retrying.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>com.google.guava</groupId>-->\n        <!--            <artifactId>guava</artifactId>-->\n        <!--            <version>${guava.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.projectlombok</groupId>-->\n        <!--            <artifactId>lombok</artifactId>-->\n        <!--            <version>${lombok.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-java</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-streaming-java_2.11</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-clients_2.11</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.mvel/mvel2 &ndash;&gt;-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.mvel</groupId>-->\n        <!--            <artifactId>mvel2</artifactId>-->\n        <!--            <version>${mvel2.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        &lt;!&ndash; https://mvnrepository.com/artifact/redis.clients/jedis &ndash;&gt;-->\n        <!--        <dependency>-->\n        <!--            <groupId>redis.clients</groupId>-->\n        <!--            <artifactId>jedis</artifactId>-->\n        <!--            <version>3.6.3</version>-->\n        <!--        </dependency>-->\n\n        <!--        &lt;!&ndash; 对zookeeper的底层api的一些封装 &ndash;&gt;-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.curator</groupId>-->\n        <!--            <artifactId>curator-framework</artifactId>-->\n        <!--            <version>${curator.version}</version>-->\n        <!--        </dependency>-->\n        <!--        &lt;!&ndash; 封装了一些高级特性，如：Cache事件监听、选举、分布式锁、分布式Barrier &ndash;&gt;-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.curator</groupId>-->\n        <!--            <artifactId>curator-recipes</artifactId>-->\n        <!--   
         <version>${curator.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.kafka</groupId>-->\n        <!--            <artifactId>kafka-clients</artifactId>-->\n        <!--            <version>${kafka.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-ant</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-cli-commons</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-cli-picocli</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-console</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-datetime</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-docgenerator</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-groovydoc</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-groovysh</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-jmx</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-json</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            
<artifactId>groovy-jsr223</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-macro</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-nio</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-servlet</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-sql</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-swing</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-templates</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-test</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-test-junit5</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-testng</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.codehaus.groovy</groupId>-->\n        <!--            <artifactId>groovy-xml</artifactId>-->\n        <!--            <version>${groovy.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-table-planner_2.11</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>com.google.code.gson</groupId>-->\n        <!--            <artifactId>gson</artifactId>-->\n        <!--            <version>${gson.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--         
   <artifactId>flink-table-common</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--            <scope>compile</scope>-->\n        <!--        </dependency>-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-table-api-java</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--            <scope>compile</scope>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-table-api-scala_2.11</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--            <scope>compile</scope>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-streaming-scala_2.11</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-streaming-java_2.11</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-table-api-java-bridge_2.11</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--            <scope>compile</scope>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-json</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.bahir/flink-connector-redis &ndash;&gt;-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.bahir</groupId>-->\n        <!--            <artifactId>flink-connector-redis_2.10</artifactId>-->\n        <!--            <version>1.0</version>-->\n        <!--        </dependency>-->\n\n\n\n        <!--        &lt;!&ndash; https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka &ndash;&gt;-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.apache.flink</groupId>-->\n        <!--            <artifactId>flink-connector-kafka_2.12</artifactId>-->\n        <!--            <version>${flink.version}</version>-->\n        <!--        </dependency>-->\n\n\n        <!--        <dependency>-->\n        <!--            <groupId>ch.qos.logback</groupId>-->\n        <!--            <artifactId>logback-classic</artifactId>-->\n        <!--            <scope>compile</scope>-->\n        <!--            <version>${logback-classic.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        &lt;!&ndash; 
https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 &ndash;&gt;-->\n        <!--        <dependency>-->\n        <!--            <groupId>org.slf4j</groupId>-->\n        <!--            <artifactId>slf4j-log4j12</artifactId>-->\n        <!--            <version>${slf4j-log4j12.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.scala-lang</groupId>-->\n        <!--            <artifactId>scala-reflect</artifactId>-->\n        <!--            <version>${scala.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.scala-lang</groupId>-->\n        <!--            <artifactId>scala-library</artifactId>-->\n        <!--            <version>${scala.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.scala-lang</groupId>-->\n        <!--            <artifactId>scala-compiler</artifactId>-->\n        <!--            <version>${scala.version}</version>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n        <!--            <groupId>org.scalatest</groupId>-->\n        <!--            <artifactId>scalatest_${scala.binary.version}</artifactId>-->\n        <!--            <version>3.0.0</version>-->\n        <!--            <scope>test</scope>-->\n        <!--        </dependency>-->\n\n        <!--        <dependency>-->\n\n        <!--            <groupId>org.apache.flink</groupId>-->\n\n        <!--            <artifactId>flink-runtime-web_2.11</artifactId>-->\n\n        <!--            <version>${flink.version}</version>-->\n\n        <!--        </dependency>-->\n\n    </dependencies>\n\n\n</project>"
  },
  {
    "path": "pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n\n    <groupId>com.github.antigeneral</groupId>\n    <artifactId>flink-study</artifactId>\n    <version>1.0-SNAPSHOT</version>\n    <modules>\n        <module>flink-examples-1.8</module>\n        <module>flink-examples-1.12</module>\n        <module>flink-examples-1.13</module>\n        <module>flink-examples-1.10</module>\n        <module>flink-examples-1.14</module>\n    </modules>\n\n    <packaging>pom</packaging>\n\n    <properties>\n        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n        <maven.compiler.source>1.8</maven.compiler.source>\n        <maven.compiler.target>1.8</maven.compiler.target>\n        <slf4j.version>1.7.25</slf4j.version>\n        <logback.version>1.2.3</logback.version>\n        <zookeeper.version>3.7.0</zookeeper.version>\n        <curator-recipes.version>4.0.1</curator-recipes.version>\n        <flink.version>1.13.5</flink.version>\n        <kafka-client.version>2.1.1</kafka-client.version>\n        <lombok.version>1.18.6</lombok.version>\n        <common-lang3.version>3.6</common-lang3.version>\n        <resilience4j.version>1.1.0</resilience4j.version>\n        <utils.version>0.0.2</utils.version>\n        <hadoop.version>3.2.1</hadoop.version>\n        <hbase.version>2.0.5</hbase.version>\n        <druidry.version>2.14</druidry.version>\n        <avatica-core.version>1.15.0</avatica-core.version>\n        <!--分布式调度框架，依赖Apache Zookeeper-->\n        <elastic-job.version>2.1.5</elastic-job.version>\n        <curator.version>2.10.0</curator.version>\n        <spring-boot.version>2.2.0.RELEASE</spring-boot.version>\n        <lombok.version>1.18.20</lombok.version>\n        <scala.binary.version>2.11</scala.binary.version>\n        <mvel2.version>2.4.12.Final</mvel2.version>\n        <curator.version>2.12.0</curator.version>\n        <kafka.version>2.8.0</kafka.version>\n        <groovy.version>2.5.7</groovy.version>\n        <gson.version>2.2.4</gson.version>\n        <guava.version>30.1.1-jre</guava.version>\n        <guava.retrying.version>2.0.0</guava.retrying.version>\n        <logback-classic.version>1.2.3</logback-classic.version>\n        <slf4j-log4j12.version>1.8.0-beta2</slf4j-log4j12.version>\n\n        <grpc-plugin.version>1.23.1</grpc-plugin.version>\n        <protobuf-maven-plugin.version>0.6.1</protobuf-maven-plugin.version>\n        <protobuf-java.version>3.11.0</protobuf-java.version>\n\n        <joda-time.version>2.5</joda-time.version>\n\n        <os-maven-plugin.version>1.6.2</os-maven-plugin.version>\n        <jackson.version>2.12.4</jackson.version>\n        <jackson-datatype-protobuf.version>0.9.12</jackson-datatype-protobuf.version>\n        <calcite.version>1.27.0</calcite.version>\n        <jedis.version>3.6.3</jedis.version>\n        <javacc.version>7.0.10</javacc.version>\n        <junit.version>4.13.2</junit.version>\n        <hive.version>3.1.2</hive.version>\n        <mysql.version>8.0.17</mysql.version>\n    </properties>\n\n    <dependencyManagement>\n        <dependencies>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-connector-hive_2.11</artifactId>\n                <version>${flink.version}</version>\n     
       </dependency>\n\n            <dependency>\n                <groupId>org.apache.hive</groupId>\n                <artifactId>hive-exec</artifactId>\n                <version>${hive.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>com.twitter</groupId>\n                <artifactId>chill-protobuf</artifactId>\n                <version>0.7.6</version>\n                <!-- exclusions for dependency conversion -->\n                <exclusions>\n                    <exclusion>\n                        <groupId>com.esotericsoftware.kryo</groupId>\n                        <artifactId>kryo</artifactId>\n                    </exclusion>\n                </exclusions>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/net.mguenther.kafka/kafka-junit -->\n            <dependency>\n                <groupId>net.mguenther.kafka</groupId>\n                <artifactId>kafka-junit</artifactId>\n                <version>2.8.0</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.kafka</groupId>\n                <artifactId>kafka_2.13</artifactId>\n                <version>${kafka.version}</version>\n            </dependency>\n\n\n            <!-- https://mvnrepository.com/artifact/junit/junit -->\n            <dependency>\n                <groupId>junit</groupId>\n                <artifactId>junit</artifactId>\n                <version>${junit.version}</version>\n                <scope>test</scope>\n            </dependency>\n\n\n            <!-- https://mvnrepository.com/artifact/net.java.dev.javacc/javacc -->\n            <dependency>\n                <groupId>net.java.dev.javacc</groupId>\n                <artifactId>javacc</artifactId>\n                <version>${javacc.version}</version>\n            </dependency>\n\n\n            <!--分布式调度框架，依赖Apache Zookeeper-->\n\n            <dependency>\n                <artifactId>elastic-job-common-core</artifactId>\n                <groupId>com.dangdang</groupId>\n                <version>${elastic-job.version}</version>\n            </dependency>\n            <dependency>\n                <artifactId>elastic-job-lite-core</artifactId>\n                <groupId>com.dangdang</groupId>\n                <version>${elastic-job.version}</version>\n            </dependency>\n            <dependency>\n                <artifactId>elastic-job-lite-spring</artifactId>\n                <groupId>com.dangdang</groupId>\n                <version>${elastic-job.version}</version>\n            </dependency>\n            <dependency>\n                <artifactId>elastic-job-cloud-executor</artifactId>\n                <groupId>com.dangdang</groupId>\n                <version>${elastic-job.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.curator</groupId>\n                <artifactId>curator-test</artifactId>\n                <version>${curator.version}</version>\n            </dependency>\n\n            <!--spring-->\n\n            <dependency>\n                <groupId>org.springframework</groupId>\n                <artifactId>spring-context</artifactId>\n                <version>${springframework.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.springframework.boot</groupId>\n                <artifactId>spring-boot-starter</artifactId>\n                <version>${spring-boot.version}</version>\n            </dependency>\n\n       
     <dependency>\n                <groupId>org.springframework.boot</groupId>\n                <artifactId>spring-boot-starter-actuator</artifactId>\n                <version>${spring-boot.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.springframework.boot</groupId>\n                <artifactId>spring-boot-starter-web</artifactId>\n                <version>${spring-boot.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.springframework.boot</groupId>\n                <artifactId>spring-boot-configuration-processor</artifactId>\n                <version>${spring-boot.version}</version>\n                <optional>true</optional>\n            </dependency>\n            <dependency>\n                <groupId>org.springframework.boot</groupId>\n                <artifactId>spring-boot-starter-jdbc</artifactId>\n                <version>${spring-boot.version}</version>\n            </dependency>\n\n            <!--slf4j 日志-->\n\n            <dependency>\n                <groupId>org.slf4j</groupId>\n                <artifactId>jcl-over-slf4j</artifactId>\n                <version>${slf4j.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.slf4j</groupId>\n                <artifactId>log4j-over-slf4j</artifactId>\n                <version>${slf4j.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.slf4j</groupId>\n                <artifactId>slf4j-api</artifactId>\n                <version>${slf4j.version}</version>\n            </dependency>\n\n            <!-- Apache Druid java客户端ORM -->\n            <dependency>\n                <groupId>in.zapr.druid</groupId>\n                <artifactId>druidry</artifactId>\n                <version>${druidry.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.hbase</groupId>\n                <artifactId>hbase-client</artifactId>\n                <version>${hbase.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.hadoop</groupId>\n                <artifactId>hadoop-common</artifactId>\n                <version>${hadoop.version}</version>\n                <exclusions>\n                    <exclusion>\n                        <artifactId>slf4j-log4j12</artifactId>\n                        <groupId>org.slf4j</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jsr311-api</artifactId>\n                        <groupId>javax.ws.rs</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-core</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-server</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-servlet</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-json</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                </exclusions>\n            </dependency>\n      
      <dependency>\n                <groupId>org.apache.hadoop</groupId>\n                <artifactId>hadoop-client</artifactId>\n                <version>${hadoop.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.apache.hadoop</groupId>\n                <artifactId>hadoop-hdfs</artifactId>\n                <version>${hadoop.version}</version>\n                <exclusions>\n                    <exclusion>\n                        <artifactId>jsr311-api</artifactId>\n                        <groupId>javax.ws.rs</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-core</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-server</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                </exclusions>\n            </dependency>\n            <dependency>\n                <groupId>org.apache.hadoop</groupId>\n                <artifactId>hadoop-mapreduce-client-core</artifactId>\n                <version>${hadoop.version}</version>\n                <exclusions>\n                    <exclusion>\n                        <artifactId>slf4j-log4j12</artifactId>\n                        <groupId>org.slf4j</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-client</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-server</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-servlet</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-core</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>jersey-json</artifactId>\n                        <groupId>com.sun.jersey</groupId>\n                    </exclusion>\n                </exclusions>\n            </dependency>\n            <dependency>\n                <groupId>org.apache.hadoop</groupId>\n                <artifactId>hadoop-auth</artifactId>\n                <version>${hadoop.version}</version>\n                <exclusions>\n                    <exclusion>\n                        <artifactId>slf4j-log4j12</artifactId>\n                        <groupId>org.slf4j</groupId>\n                    </exclusion>\n                </exclusions>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-streaming-java_2.11</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-statebackend-rocksdb_2.11</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-core -->\n            <dependency>\n                
<groupId>org.apache.flink</groupId>\n                <artifactId>flink-clients_2.11</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-clients -->\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-clients_2.12</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-connector-kafka-0.10_2.12</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-connector-filesystem_2.12</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-core</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.zookeeper</groupId>\n                <artifactId>zookeeper</artifactId>\n                <version>${zookeeper.version}</version>\n                <exclusions>\n                    <exclusion>\n                        <artifactId>slf4j-log4j12</artifactId>\n                        <groupId>org.slf4j</groupId>\n                    </exclusion>\n                    <exclusion>\n                        <artifactId>log4j</artifactId>\n                        <groupId>log4j</groupId>\n                    </exclusion>\n                </exclusions>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.kafka</groupId>\n                <artifactId>kafka-clients</artifactId>\n                <version>${kafka.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.projectlombok</groupId>\n                <artifactId>lombok</artifactId>\n                <version>${lombok.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.commons</groupId>\n                <artifactId>commons-lang3</artifactId>\n                <version>${common-lang3.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.curator</groupId>\n                <artifactId>curator-recipes</artifactId>\n                <version>${curator-recipes.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>ch.qos.logback</groupId>\n                <artifactId>logback-classic</artifactId>\n                <version>${logback.version}</version>\n                <exclusions>\n                    <exclusion>\n                        <artifactId>slf4j-api</artifactId>\n                        <groupId>org.slf4j</groupId>\n                    </exclusion>\n                </exclusions>\n            </dependency>\n\n            <dependency>\n                <groupId>io.github.resilience4j</groupId>\n                <artifactId>resilience4j-retry</artifactId>\n                <version>${resilience4j.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>io.github.resilience4j</groupId>\n                <artifactId>resilience4j-circuitbreaker</artifactId>\n    
            <version>${resilience4j.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>io.github.resilience4j</groupId>\n                <artifactId>resilience4j-ratelimiter</artifactId>\n                <version>${resilience4j.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>io.github.resilience4j</groupId>\n                <artifactId>resilience4j-bulkhead</artifactId>\n                <version>${resilience4j.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>io.github.resilience4j</groupId>\n                <artifactId>resilience4j-annotations</artifactId>\n                <version>${resilience4j.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>io.github.resilience4j</groupId>\n                <artifactId>resilience4j-timelimiter</artifactId>\n                <version>${resilience4j.version}</version>\n            </dependency>\n\n\n            <dependency>\n                <groupId>commons-dbcp</groupId>\n                <artifactId>commons-dbcp</artifactId>\n                <version>${commons-dbcp.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>com.h2database</groupId>\n                <artifactId>h2</artifactId>\n                <version>${h2.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>mysql</groupId>\n                <artifactId>mysql-connector-java</artifactId>\n                <version>${mysql.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.httpcomponents</groupId>\n                <artifactId>httpclient</artifactId>\n                <version>4.5.10</version>\n                <scope>compile</scope>\n            </dependency>\n\n            <dependency>\n                <groupId>joda-time</groupId>\n                <artifactId>joda-time</artifactId>\n                <!-- managed version -->\n                <scope>provided</scope>\n                <!-- Avro records can contain JodaTime fields when using logical fields.\n                    In order to handle them, we need to add an optional dependency.\n                    Users with those Avro records need to add this dependency themselves. 
-->\n                <optional>true</optional>\n                <version>${joda-time.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>com.google.protobuf</groupId>\n                <artifactId>protobuf-java</artifactId>\n                <version>${protobuf-java.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/com.github.rholder/guava-retrying -->\n            <dependency>\n                <groupId>com.github.rholder</groupId>\n                <artifactId>guava-retrying</artifactId>\n                <version>${guava.retrying.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>com.google.guava</groupId>\n                <artifactId>guava</artifactId>\n                <version>${guava.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-java</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/org.mvel/mvel2 -->\n            <dependency>\n                <groupId>org.mvel</groupId>\n                <artifactId>mvel2</artifactId>\n                <version>${mvel2.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/redis.clients/jedis -->\n            <dependency>\n                <groupId>redis.clients</groupId>\n                <artifactId>jedis</artifactId>\n                <version>${jedis.version}</version>\n            </dependency>\n\n            <!-- Thin wrapper around ZooKeeper's low-level API -->\n            <dependency>\n                <groupId>org.apache.curator</groupId>\n                <artifactId>curator-framework</artifactId>\n                <version>${curator.version}</version>\n            </dependency>\n            <!-- Higher-level recipes such as cache event listeners, leader election, distributed locks and distributed barriers -->\n            <dependency>\n                <groupId>org.apache.curator</groupId>\n                <artifactId>curator-recipes</artifactId>\n                <version>${curator.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-ant</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-cli-commons</artifactId>\n                
<version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-cli-picocli</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-console</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-datetime</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-docgenerator</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-groovydoc</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-groovysh</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-jmx</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-json</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-jsr223</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-macro</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-nio</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-servlet</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-sql</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-swing</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-templates</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-test</artifactId>\n                <version>${groovy.version}</version>\n     
       </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-test-junit5</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-testng</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n            <dependency>\n                <groupId>org.codehaus.groovy</groupId>\n                <artifactId>groovy-xml</artifactId>\n                <version>${groovy.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-table-planner_2.11</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>com.google.code.gson</groupId>\n                <artifactId>gson</artifactId>\n                <version>${gson.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-table-common</artifactId>\n                <version>${flink.version}</version>\n                <scope>compile</scope>\n            </dependency>\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-table-api-java</artifactId>\n                <version>${flink.version}</version>\n                <scope>compile</scope>\n            </dependency>\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-table-api-java-bridge_2.11</artifactId>\n                <version>${flink.version}</version>\n                <scope>compile</scope>\n            </dependency>\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-table-planner-blink_2.11</artifactId>\n                <version>${flink.version}</version>\n                <scope>compile</scope>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-jdbc -->\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-connector-jdbc_2.11</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-connector-hbase-2.2_2.11</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-streaming-scala_2.11</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-json</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/org.apache.bahir/flink-connector-redis -->\n            <dependency>\n                <groupId>org.apache.bahir</groupId>\n                <artifactId>flink-connector-redis_2.10</artifactId>\n                <version>1.0</version>\n            </dependency>\n\n\n            <!-- 
https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka -->\n            <dependency>\n                <groupId>org.apache.flink</groupId>\n                <artifactId>flink-connector-kafka_2.12</artifactId>\n                <version>${flink.version}</version>\n            </dependency>\n\n\n            <dependency>\n                <groupId>ch.qos.logback</groupId>\n                <artifactId>logback-classic</artifactId>\n                <scope>compile</scope>\n                <version>${logback-classic.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->\n            <dependency>\n                <groupId>org.slf4j</groupId>\n                <artifactId>slf4j-log4j12</artifactId>\n                <version>${slf4j-log4j12.version}</version>\n            </dependency>\n\n\n            <dependency>\n\n                <groupId>org.apache.flink</groupId>\n\n                <artifactId>flink-runtime-web_2.11</artifactId>\n\n                <version>${flink.version}</version>\n\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind -->\n            <dependency>\n                <groupId>com.fasterxml.jackson.core</groupId>\n                <artifactId>jackson-databind</artifactId>\n                <version>${jackson.version}</version>\n            </dependency>\n\n            <dependency>\n                <groupId>com.fasterxml.jackson.core</groupId>\n                <artifactId>jackson-core</artifactId>\n                <version>${jackson.version}</version>\n\n            </dependency>\n\n            <dependency>\n                <groupId>com.fasterxml.jackson.core</groupId>\n                <artifactId>jackson-annotations</artifactId>\n                <version>${jackson.version}</version>\n\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-kotlin -->\n            <dependency>\n                <groupId>com.fasterxml.jackson.module</groupId>\n                <artifactId>jackson-module-kotlin</artifactId>\n                <version>${jackson.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.module/jackson-module-parameter-names -->\n            <dependency>\n                <groupId>com.fasterxml.jackson.module</groupId>\n                <artifactId>jackson-module-parameter-names</artifactId>\n                <version>${jackson.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/com.fasterxml.jackson.datatype/jackson-datatype-guava -->\n            <dependency>\n                <groupId>com.fasterxml.jackson.datatype</groupId>\n                <artifactId>jackson-datatype-guava</artifactId>\n                <version>${jackson.version}</version>\n            </dependency>\n\n\n            <!-- https://mvnrepository.com/artifact/com.hubspot.jackson/jackson-datatype-protobuf -->\n            <dependency>\n                <groupId>com.hubspot.jackson</groupId>\n                <artifactId>jackson-datatype-protobuf</artifactId>\n                <version>${jackson-datatype-protobuf.version}</version>\n            </dependency>\n\n            <!-- https://mvnrepository.com/artifact/org.apache.calcite/calcite-core -->\n            <dependency>\n                <groupId>org.apache.calcite</groupId>\n                
<artifactId>calcite-core</artifactId>\n                <version>${calcite.version}</version>\n            </dependency>\n\n        </dependencies>\n    </dependencyManagement>\n\n    <build>\n\n        <pluginManagement>\n            <plugins>\n\n                <plugin>\n                    <groupId>org.apache.maven.plugins</groupId>\n                    <artifactId>maven-compiler-plugin</artifactId>\n                    <configuration>\n                        <source>8</source>\n                        <target>8</target>\n                    </configuration>\n                </plugin>\n\n                <plugin>\n                    <groupId>org.xolstice.maven.plugins</groupId>\n                    <artifactId>protobuf-maven-plugin</artifactId>\n                    <version>${protobuf-maven-plugin.version}</version>\n                    <configuration>\n                        <protoSourceRoot>\n                            src/main/proto\n                        </protoSourceRoot>\n                        <protocArtifact>\n                            com.google.protobuf:protoc:3.1.0:exe:${os.detected.classifier}\n                        </protocArtifact>\n                        <pluginId>grpc-java</pluginId>\n                        <pluginArtifact>\n                            io.grpc:protoc-gen-grpc-java:${grpc-plugin.version}:exe:${os.detected.classifier}\n                        </pluginArtifact>\n                    </configuration>\n                    <executions>\n                        <execution>\n                            <goals>\n                                <goal>compile</goal>\n                                <goal>compile-custom</goal>\n                            </goals>\n                        </execution>\n                    </executions>\n                </plugin>\n\n\n                <plugin>\n                    <!-- Extract parser grammar template from calcite-core.jar and put\n                         it under ${project.build.directory} where all freemarker templates are. 
-->\n                    <groupId>org.apache.maven.plugins</groupId>\n                    <artifactId>maven-dependency-plugin</artifactId>\n                    <executions>\n                        <execution>\n                            <id>unpack-parser-template</id>\n                            <phase>initialize</phase>\n                            <goals>\n                                <goal>unpack</goal>\n                            </goals>\n                            <configuration>\n                                <artifactItems>\n                                    <artifactItem>\n                                        <groupId>org.apache.calcite</groupId>\n                                        <artifactId>calcite-core</artifactId>\n                                        <type>jar</type>\n                                        <overWrite>true</overWrite>\n                                        <outputDirectory>${project.build.directory}/</outputDirectory>\n                                        <includes>**/Parser.jj</includes>\n                                    </artifactItem>\n                                </artifactItems>\n                            </configuration>\n                        </execution>\n                    </executions>\n                </plugin>\n                <!-- adding fmpp code gen -->\n                <plugin>\n                    <artifactId>maven-resources-plugin</artifactId>\n                    <executions>\n                        <execution>\n                            <id>copy-fmpp-resources</id>\n                            <phase>initialize</phase>\n                            <goals>\n                                <goal>copy-resources</goal>\n                            </goals>\n                            <configuration>\n                                <outputDirectory>${project.build.directory}/codegen</outputDirectory>\n                                <resources>\n                                    <resource>\n                                        <directory>src/main/codegen</directory>\n                                        <filtering>false</filtering>\n                                    </resource>\n                                </resources>\n                            </configuration>\n                        </execution>\n                    </executions>\n                </plugin>\n                <plugin>\n                    <groupId>com.googlecode.fmpp-maven-plugin</groupId>\n                    <artifactId>fmpp-maven-plugin</artifactId>\n                    <version>1.0</version>\n                    <dependencies>\n                        <dependency>\n                            <groupId>org.freemarker</groupId>\n                            <artifactId>freemarker</artifactId>\n                            <version>2.3.28</version>\n                        </dependency>\n                    </dependencies>\n                    <executions>\n                        <execution>\n                            <id>generate-fmpp-sources</id>\n                            <phase>generate-sources</phase>\n                            <goals>\n                                <goal>generate</goal>\n                            </goals>\n                            <configuration>\n                                <cfgFile>${project.build.directory}/codegen/config.fmpp</cfgFile>\n                                <outputDirectory>target/generated-sources</outputDirectory>\n                                
<templateDirectory>${project.build.directory}/codegen/templates</templateDirectory>\n                            </configuration>\n                        </execution>\n                    </executions>\n                </plugin>\n                <plugin>\n                    <!-- This must be run AFTER the fmpp-maven-plugin -->\n                    <groupId>org.codehaus.mojo</groupId>\n                    <artifactId>javacc-maven-plugin</artifactId>\n                    <version>2.4</version>\n                    <executions>\n                        <execution>\n                            <phase>generate-sources</phase>\n                            <id>javacc</id>\n                            <goals>\n                                <goal>javacc</goal>\n                            </goals>\n                            <configuration>\n                                <sourceDirectory>${project.build.directory}/generated-sources/</sourceDirectory>\n                                <includes>\n                                    <include>**/Simple1.jj</include>\n                                </includes>\n                                <!-- This must be kept synced with Apache Calcite. -->\n                                <lookAhead>1</lookAhead>\n                                <isStatic>false</isStatic>\n                                <outputDirectory>${project.build.directory}/generated-sources/</outputDirectory>\n                            </configuration>\n                        </execution>\n                    </executions>\n                </plugin>\n                <plugin>\n                    <groupId>org.apache.maven.plugins</groupId>\n                    <artifactId>maven-surefire-plugin</artifactId>\n                    <configuration>\n                        <forkCount>1</forkCount>\n                        <reuseForks>false</reuseForks>\n                    </configuration>\n                </plugin>\n\n            </plugins>\n        </pluginManagement>\n    </build>\n\n\n</project>"
  }
]