Full Code of ZTO-Express/fire for AI

Repository: ZTO-Express/fire
Branch: main
Commit: f3984f90fc77
Files: 728
Total size: 4.6 MB
Tokens: ~1.3M
Symbols: 3524

(Preview only; the full dump is 5,239K chars, so the directory tree below is truncated.)

Directory structure:
fire/

├── .gitignore
├── LICENSE
├── README.md
├── docs/
│   ├── accumulator.md
│   ├── anno.md
│   ├── connector/
│   │   ├── adb.md
│   │   ├── clickhouse.md
│   │   ├── hbase.md
│   │   ├── hive.md
│   │   ├── jdbc.md
│   │   ├── kafka.md
│   │   ├── oracle.md
│   │   └── rocketmq.md
│   ├── datasource.md
│   ├── dev/
│   │   ├── config.md
│   │   ├── deploy-script.md
│   │   ├── engine-env.md
│   │   └── integration.md
│   ├── feature.md
│   ├── highlight/
│   │   ├── checkpoint.md
│   │   └── spark-duration.md
│   ├── index.md
│   ├── platform.md
│   ├── pom/
│   │   ├── flink-pom.xml
│   │   └── spark-pom.xml
│   ├── properties.md
│   ├── restful.md
│   ├── schedule.md
│   └── threadpool.md
├── fire-common/
│   ├── pom.xml
│   └── src/
│       ├── main/
│       │   ├── java/
│       │   │   └── com/
│       │   │       └── zto/
│       │   │           └── fire/
│       │   │               └── common/
│       │   │                   ├── anno/
│       │   │                   │   ├── Config.java
│       │   │                   │   ├── FieldName.java
│       │   │                   │   ├── FireConf.java
│       │   │                   │   ├── Internal.java
│       │   │                   │   ├── Rest.java
│       │   │                   │   ├── Scheduled.java
│       │   │                   │   └── TestStep.java
│       │   │                   ├── bean/
│       │   │                   │   ├── FireTask.java
│       │   │                   │   ├── analysis/
│       │   │                   │   │   └── ExceptionMsg.java
│       │   │                   │   ├── config/
│       │   │                   │   │   └── ConfigurationParam.java
│       │   │                   │   ├── lineage/
│       │   │                   │   │   ├── Lineage.java
│       │   │                   │   │   ├── SQLLineage.java
│       │   │                   │   │   ├── SQLTable.java
│       │   │                   │   │   ├── SQLTableColumns.java
│       │   │                   │   │   ├── SQLTablePartitions.java
│       │   │                   │   │   └── SQLTableRelations.java
│       │   │                   │   ├── rest/
│       │   │                   │   │   ├── ResultMsg.java
│       │   │                   │   │   └── yarn/
│       │   │                   │   │       └── App.java
│       │   │                   │   └── runtime/
│       │   │                   │       ├── ClassLoaderInfo.java
│       │   │                   │       ├── CpuInfo.java
│       │   │                   │       ├── DiskInfo.java
│       │   │                   │       ├── DisplayInfo.java
│       │   │                   │       ├── HardwareInfo.java
│       │   │                   │       ├── JvmInfo.java
│       │   │                   │       ├── MemoryInfo.java
│       │   │                   │       ├── NetworkInfo.java
│       │   │                   │       ├── OSInfo.java
│       │   │                   │       ├── RuntimeInfo.java
│       │   │                   │       ├── ThreadInfo.java
│       │   │                   │       └── UsbInfo.java
│       │   │                   ├── enu/
│       │   │                   │   ├── ConfigureLevel.java
│       │   │                   │   ├── Datasource.java
│       │   │                   │   ├── ErrorCode.java
│       │   │                   │   ├── JdbcDriver.java
│       │   │                   │   ├── JobType.java
│       │   │                   │   ├── Operation.java
│       │   │                   │   ├── RequestMethod.scala
│       │   │                   │   ├── ThreadPoolType.java
│       │   │                   │   └── YarnState.java
│       │   │                   ├── exception/
│       │   │                   │   ├── FireException.java
│       │   │                   │   ├── FireFlinkException.java
│       │   │                   │   └── FireSparkException.java
│       │   │                   └── util/
│       │   │                       ├── EncryptUtils.java
│       │   │                       ├── FileUtils.java
│       │   │                       ├── FindClassUtils.java
│       │   │                       ├── HttpClientUtils.java
│       │   │                       ├── IOUtils.java
│       │   │                       ├── MathUtils.java
│       │   │                       ├── OSUtils.java
│       │   │                       ├── ProcessUtil.java
│       │   │                       ├── ReflectionUtils.java
│       │   │                       ├── StringsUtils.java
│       │   │                       ├── UnitFormatUtils.java
│       │   │                       └── YarnUtils.java
│       │   ├── resources/
│       │   │   └── log4j.properties
│       │   └── scala/
│       │       └── com/
│       │           └── zto/
│       │               └── fire/
│       │                   └── common/
│       │                       ├── bean/
│       │                       │   └── TableIdentifier.scala
│       │                       ├── conf/
│       │                       │   ├── FireConf.scala
│       │                       │   ├── FireFrameworkConf.scala
│       │                       │   ├── FireHDFSConf.scala
│       │                       │   ├── FireHiveConf.scala
│       │                       │   ├── FireKafkaConf.scala
│       │                       │   ├── FirePS1Conf.scala
│       │                       │   ├── FireRocketMQConf.scala
│       │                       │   └── KeyNum.scala
│       │                       ├── ext/
│       │                       │   ├── JavaExt.scala
│       │                       │   └── ScalaExt.scala
│       │                       ├── package.scala
│       │                       └── util/
│       │                           ├── ConfigurationCenterManager.scala
│       │                           ├── DateFormatUtils.scala
│       │                           ├── ExceptionBus.scala
│       │                           ├── FireFunctions.scala
│       │                           ├── FireUtils.scala
│       │                           ├── JSONUtils.scala
│       │                           ├── JavaTypeMap.scala
│       │                           ├── KafkaUtils.scala
│       │                           ├── LineageManager.scala
│       │                           ├── LogUtils.scala
│       │                           ├── Logging.scala
│       │                           ├── MQProducer.scala
│       │                           ├── NumberFormatUtils.scala
│       │                           ├── PropUtils.scala
│       │                           ├── RegularUtils.scala
│       │                           ├── SQLLineageManager.scala
│       │                           ├── SQLUtils.scala
│       │                           ├── ScalaUtils.scala
│       │                           ├── ShutdownHookManager.scala
│       │                           ├── ThreadUtils.scala
│       │                           ├── Tools.scala
│       │                           └── ValueUtils.scala
│       └── test/
│           └── scala/
│               └── com/
│                   └── zto/
│                       └── fire/
│                           └── common/
│                               └── util/
│                                   ├── RegularUtilsUnitTest.scala
│                                   ├── SQLUtilsTest.scala
│                                   ├── ShutdownHookManagerTest.scala
│                                   └── ValueUtilsTest.scala
├── fire-connectors/
│   ├── .gitignore
│   ├── base-connectors/
│   │   ├── fire-hbase/
│   │   │   ├── pom.xml
│   │   │   └── src/
│   │   │       └── main/
│   │   │           ├── java/
│   │   │           │   └── com/
│   │   │           │       └── zto/
│   │   │           │           └── fire/
│   │   │           │               └── hbase/
│   │   │           │                   └── anno/
│   │   │           │                       └── HConfig.java
│   │   │           └── scala/
│   │   │               └── com/
│   │   │                   └── zto/
│   │   │                       └── fire/
│   │   │                           └── hbase/
│   │   │                               ├── HBaseConnector.scala
│   │   │                               ├── HBaseFunctions.scala
│   │   │                               ├── bean/
│   │   │                               │   ├── HBaseBaseBean.java
│   │   │                               │   └── MultiVersionsBean.java
│   │   │                               ├── conf/
│   │   │                               │   └── FireHBaseConf.scala
│   │   │                               └── utils/
│   │   │                                   └── HBaseUtils.scala
│   │   ├── fire-jdbc/
│   │   │   ├── pom.xml
│   │   │   └── src/
│   │   │       └── main/
│   │   │           ├── resources/
│   │   │           │   └── driver.properties
│   │   │           └── scala/
│   │   │               └── com/
│   │   │                   └── zto/
│   │   │                       └── fire/
│   │   │                           └── jdbc/
│   │   │                               ├── JdbcConnector.scala
│   │   │                               ├── JdbcConnectorBridge.scala
│   │   │                               ├── JdbcFunctions.scala
│   │   │                               ├── conf/
│   │   │                               │   └── FireJdbcConf.scala
│   │   │                               └── util/
│   │   │                                   └── DBUtils.scala
│   │   └── pom.xml
│   ├── flink-connectors/
│   │   ├── flink-clickhouse/
│   │   │   ├── pom.xml
│   │   │   └── src/
│   │   │       └── main/
│   │   │           ├── java-flink-1.14/
│   │   │           │   └── org/
│   │   │           │       └── apache/
│   │   │           │           └── flink/
│   │   │           │               └── connector/
│   │   │           │                   └── clickhouse/
│   │   │           │                       ├── ClickHouseDynamicTableFactory.java
│   │   │           │                       ├── ClickHouseDynamicTableSink.java
│   │   │           │                       ├── ClickHouseDynamicTableSource.java
│   │   │           │                       ├── catalog/
│   │   │           │                       │   ├── ClickHouseCatalog.java
│   │   │           │                       │   └── ClickHouseCatalogFactory.java
│   │   │           │                       ├── config/
│   │   │           │                       │   ├── ClickHouseConfig.java
│   │   │           │                       │   └── ClickHouseConfigOptions.java
│   │   │           │                       ├── internal/
│   │   │           │                       │   ├── AbstractClickHouseInputFormat.java
│   │   │           │                       │   ├── AbstractClickHouseOutputFormat.java
│   │   │           │                       │   ├── ClickHouseBatchInputFormat.java
│   │   │           │                       │   ├── ClickHouseBatchOutputFormat.java
│   │   │           │                       │   ├── ClickHouseShardInputFormat.java
│   │   │           │                       │   ├── ClickHouseShardOutputFormat.java
│   │   │           │                       │   ├── ClickHouseStatementFactory.java
│   │   │           │                       │   ├── common/
│   │   │           │                       │   │   └── DistributedEngineFullSchema.java
│   │   │           │                       │   ├── connection/
│   │   │           │                       │   │   └── ClickHouseConnectionProvider.java
│   │   │           │                       │   ├── converter/
│   │   │           │                       │   │   ├── ClickHouseConverterUtils.java
│   │   │           │                       │   │   └── ClickHouseRowConverter.java
│   │   │           │                       │   ├── executor/
│   │   │           │                       │   │   ├── ClickHouseBatchExecutor.java
│   │   │           │                       │   │   ├── ClickHouseExecutor.java
│   │   │           │                       │   │   └── ClickHouseUpsertExecutor.java
│   │   │           │                       │   ├── options/
│   │   │           │                       │   │   ├── ClickHouseConnectionOptions.java
│   │   │           │                       │   │   ├── ClickHouseDmlOptions.java
│   │   │           │                       │   │   └── ClickHouseReadOptions.java
│   │   │           │                       │   └── partitioner/
│   │   │           │                       │       ├── BalancedPartitioner.java
│   │   │           │                       │       ├── ClickHousePartitioner.java
│   │   │           │                       │       ├── HashPartitioner.java
│   │   │           │                       │       └── ShufflePartitioner.java
│   │   │           │                       ├── split/
│   │   │           │                       │   ├── ClickHouseBatchBetweenParametersProvider.java
│   │   │           │                       │   ├── ClickHouseBetweenParametersProvider.java
│   │   │           │                       │   ├── ClickHouseParametersProvider.java
│   │   │           │                       │   ├── ClickHouseShardBetweenParametersProvider.java
│   │   │           │                       │   └── ClickHouseShardTableParametersProvider.java
│   │   │           │                       └── util/
│   │   │           │                           ├── ClickHouseTypeUtil.java
│   │   │           │                           ├── ClickHouseUtil.java
│   │   │           │                           ├── FilterPushDownHelper.java
│   │   │           │                           └── SqlClause.java
│   │   │           └── resources/
│   │   │               └── META-INF/
│   │   │                   └── services/
│   │   │                       └── org.apache.flink.table.factories.Factory
│   │   ├── flink-es/
│   │   │   └── pom.xml
│   │   ├── flink-rocketmq/
│   │   │   ├── pom.xml
│   │   │   └── src/
│   │   │       └── main/
│   │   │           ├── java/
│   │   │           │   └── org/
│   │   │           │       └── apache/
│   │   │           │           └── rocketmq/
│   │   │           │               └── flink/
│   │   │           │                   ├── RocketMQConfig.java
│   │   │           │                   ├── RocketMQSink.java
│   │   │           │                   ├── RocketMQSinkWithTag.java
│   │   │           │                   ├── RocketMQSource.java
│   │   │           │                   ├── RocketMQUtils.java
│   │   │           │                   ├── RunningChecker.java
│   │   │           │                   └── common/
│   │   │           │                       ├── selector/
│   │   │           │                       │   ├── DefaultTopicSelector.java
│   │   │           │                       │   ├── SimpleTopicSelector.java
│   │   │           │                       │   └── TopicSelector.java
│   │   │           │                       └── serialization/
│   │   │           │                           ├── JsonSerializationSchema.java
│   │   │           │                           ├── KeyValueDeserializationSchema.java
│   │   │           │                           ├── KeyValueSerializationSchema.java
│   │   │           │                           ├── SimpleKeyValueDeserializationSchema.java
│   │   │           │                           ├── SimpleKeyValueSerializationSchema.java
│   │   │           │                           └── TagKeyValueSerializationSchema.java
│   │   │           ├── java-flink-1.12/
│   │   │           │   └── org/
│   │   │           │       └── apache/
│   │   │           │           └── rocketmq/
│   │   │           │               └── flink/
│   │   │           │                   ├── RocketMQSourceWithTag.java
│   │   │           │                   └── common/
│   │   │           │                       └── serialization/
│   │   │           │                           ├── JsonDeserializationSchema.java
│   │   │           │                           ├── SimpleTagKeyValueDeserializationSchema.java
│   │   │           │                           └── TagKeyValueDeserializationSchema.java
│   │   │           ├── java-flink-1.13/
│   │   │           │   └── org/
│   │   │           │       └── apache/
│   │   │           │           └── rocketmq/
│   │   │           │               └── flink/
│   │   │           │                   ├── RocketMQSourceWithTag.java
│   │   │           │                   └── common/
│   │   │           │                       └── serialization/
│   │   │           │                           ├── JsonDeserializationSchema.java
│   │   │           │                           ├── SimpleTagKeyValueDeserializationSchema.java
│   │   │           │                           └── TagKeyValueDeserializationSchema.java
│   │   │           ├── java-flink-1.14/
│   │   │           │   └── org/
│   │   │           │       └── apache/
│   │   │           │           └── rocketmq/
│   │   │           │               └── flink/
│   │   │           │                   ├── RocketMQSourceWithTag.java
│   │   │           │                   └── common/
│   │   │           │                       └── serialization/
│   │   │           │                           ├── JsonDeserializationSchema.java
│   │   │           │                           ├── SimpleTagKeyValueDeserializationSchema.java
│   │   │           │                           └── TagKeyValueDeserializationSchema.java
│   │   │           ├── resources/
│   │   │           │   └── META-INF/
│   │   │           │       └── services/
│   │   │           │           └── org.apache.flink.table.factories.Factory
│   │   │           └── scala/
│   │   │               └── com/
│   │   │                   └── zto/
│   │   │                       └── fire/
│   │   │                           └── flink/
│   │   │                               └── sql/
│   │   │                                   └── connector/
│   │   │                                       └── rocketmq/
│   │   │                                           ├── RocketMQDynamicTableFactory.scala
│   │   │                                           ├── RocketMQDynamicTableSink.scala
│   │   │                                           ├── RocketMQDynamicTableSource.scala
│   │   │                                           └── RocketMQOptions.scala
│   │   └── pom.xml
│   ├── pom.xml
│   └── spark-connectors/
│       ├── pom.xml
│       ├── spark-hbase/
│       │   ├── pom.xml
│       │   └── src/
│       │       └── main/
│       │           ├── java/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           └── hadoop/
│       │           │               └── hbase/
│       │           │                   ├── client/
│       │           │                   │   ├── ConnFactoryExtend.java
│       │           │                   │   └── ConnectionFactory.java
│       │           │                   └── spark/
│       │           │                       ├── SparkSQLPushDownFilter.java
│       │           │                       ├── example/
│       │           │                       │   └── hbasecontext/
│       │           │                       │       ├── JavaHBaseBulkDeleteExample.java
│       │           │                       │       ├── JavaHBaseBulkGetExample.java
│       │           │                       │       ├── JavaHBaseBulkPutExample.java
│       │           │                       │       ├── JavaHBaseDistributedScan.java
│       │           │                       │       ├── JavaHBaseMapGetPutExample.java
│       │           │                       │       └── JavaHBaseStreamingBulkPutExample.java
│       │           │                       └── protobuf/
│       │           │                           └── generated/
│       │           │                               └── FilterProtos.java
│       │           ├── protobuf/
│       │           │   └── Filter.proto
│       │           ├── scala/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           └── hadoop/
│       │           │               └── hbase/
│       │           │                   └── spark/
│       │           │                       ├── BulkLoadPartitioner.scala
│       │           │                       ├── ByteArrayComparable.scala
│       │           │                       ├── ByteArrayWrapper.scala
│       │           │                       ├── ColumnFamilyQualifierMapKeyWrapper.scala
│       │           │                       ├── DefaultSource.scala
│       │           │                       ├── DynamicLogicExpression.scala
│       │           │                       ├── FamiliesQualifiersValues.scala
│       │           │                       ├── FamilyHFileWriteOptions.scala
│       │           │                       ├── HBaseContext.scala
│       │           │                       ├── HBaseDStreamFunctions.scala
│       │           │                       ├── HBaseRDDFunctions.scala
│       │           │                       ├── JavaHBaseContext.scala
│       │           │                       ├── KeyFamilyQualifier.scala
│       │           │                       ├── NewHBaseRDD.scala
│       │           │                       ├── datasources/
│       │           │                       │   ├── Bound.scala
│       │           │                       │   ├── HBaseResources.scala
│       │           │                       │   ├── HBaseSparkConf.scala
│       │           │                       │   ├── SerializableConfiguration.scala
│       │           │                       │   └── package.scala
│       │           │                       └── example/
│       │           │                           ├── hbasecontext/
│       │           │                           │   ├── HBaseBulkDeleteExample.scala
│       │           │                           │   ├── HBaseBulkGetExample.scala
│       │           │                           │   ├── HBaseBulkPutExample.scala
│       │           │                           │   ├── HBaseBulkPutExampleFromFile.scala
│       │           │                           │   ├── HBaseBulkPutTimestampExample.scala
│       │           │                           │   ├── HBaseDistributedScanExample.scala
│       │           │                           │   └── HBaseStreamingBulkPutExample.scala
│       │           │                           └── rdd/
│       │           │                               ├── HBaseBulkDeleteExample.scala
│       │           │                               ├── HBaseBulkGetExample.scala
│       │           │                               ├── HBaseBulkPutExample.scala
│       │           │                               ├── HBaseForeachPartitionExample.scala
│       │           │                               └── HBaseMapPartitionExample.scala
│       │           ├── scala-spark-2.3/
│       │           │   └── apache/
│       │           │       └── hadoop/
│       │           │           └── hbase/
│       │           │               └── spark/
│       │           │                   └── datasources/
│       │           │                       └── HBaseTableScanRDD.scala
│       │           ├── scala-spark-2.4/
│       │           │   └── apache/
│       │           │       └── hadoop/
│       │           │           └── hbase/
│       │           │               └── spark/
│       │           │                   └── datasources/
│       │           │                       └── HBaseTableScanRDD.scala
│       │           ├── scala-spark-3.0/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           ├── hadoop/
│       │           │           │   └── hbase/
│       │           │           │       └── spark/
│       │           │           │           └── datasources/
│       │           │           │               └── HBaseTableScanRDD.scala
│       │           │           └── spark/
│       │           │               └── deploy/
│       │           │                   └── SparkHadoopUtil.scala
│       │           ├── scala-spark-3.1/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           ├── hadoop/
│       │           │           │   └── hbase/
│       │           │           │       └── spark/
│       │           │           │           └── datasources/
│       │           │           │               └── HBaseTableScanRDD.scala
│       │           │           └── spark/
│       │           │               └── deploy/
│       │           │                   └── SparkHadoopUtil.scala
│       │           ├── scala-spark-3.2/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           ├── hadoop/
│       │           │           │   └── hbase/
│       │           │           │       └── spark/
│       │           │           │           └── datasources/
│       │           │           │               └── HBaseTableScanRDD.scala
│       │           │           └── spark/
│       │           │               └── deploy/
│       │           │                   └── SparkHadoopUtil.scala
│       │           └── scala-spark-3.3/
│       │               └── org/
│       │                   └── apache/
│       │                       ├── hadoop/
│       │                       │   └── hbase/
│       │                       │       └── spark/
│       │                       │           └── datasources/
│       │                       │               └── HBaseTableScanRDD.scala
│       │                       └── spark/
│       │                           └── deploy/
│       │                               └── SparkHadoopUtil.scala
│       └── spark-rocketmq/
│           ├── pom.xml
│           └── src/
│               └── main/
│                   ├── java/
│                   │   └── org/
│                   │       └── apache/
│                   │           └── rocketmq/
│                   │               └── spark/
│                   │                   ├── OffsetCommitCallback.java
│                   │                   ├── RocketMQConfig.java
│                   │                   ├── TopicQueueId.java
│                   │                   └── streaming/
│                   │                       ├── DefaultMessageRetryManager.java
│                   │                       ├── MessageRetryManager.java
│                   │                       ├── MessageSet.java
│                   │                       ├── ReliableRocketMQReceiver.java
│                   │                       └── RocketMQReceiver.java
│                   ├── scala/
│                   │   └── org/
│                   │       └── apache/
│                   │           ├── rocketmq/
│                   │           │   └── spark/
│                   │           │       ├── CachedMQConsumer.scala
│                   │           │       ├── ConsumerStrategy.scala
│                   │           │       ├── LocationStrategy.scala
│                   │           │       ├── Logging.scala
│                   │           │       ├── OffsetRange.scala
│                   │           │       ├── RocketMqRDDPartition.scala
│                   │           │       └── RocketMqUtils.scala
│                   │           └── spark/
│                   │               ├── sql/
│                   │               │   └── rocketmq/
│                   │               │       ├── CachedRocketMQConsumer.scala
│                   │               │       ├── CachedRocketMQProducer.scala
│                   │               │       ├── JsonUtils.scala
│                   │               │       ├── RocketMQConf.scala
│                   │               │       ├── RocketMQOffsetRangeLimit.scala
│                   │               │       ├── RocketMQOffsetReader.scala
│                   │               │       ├── RocketMQRelation.scala
│                   │               │       ├── RocketMQSink.scala
│                   │               │       ├── RocketMQSourceProvider.scala
│                   │               │       ├── RocketMQUtils.scala
│                   │               │       ├── RocketMQWriteTask.scala
│                   │               │       └── RocketMQWriter.scala
│                   │               └── streaming/
│                   │                   └── MQPullInputDStream.scala
│                   ├── scala-spark-2.3/
│                   │   ├── org/
│                   │   │   └── apache/
│                   │   │       └── spark/
│                   │   │           └── sql/
│                   │   │               └── rocketmq/
│                   │   │                   ├── RocketMQSource.scala
│                   │   │                   ├── RocketMQSourceOffset.scala
│                   │   │                   └── RocketMQSourceRDDOffsetRange.scala
│                   │   └── org.apache.spark.streaming/
│                   │       └── RocketMqRDD.scala
│                   ├── scala-spark-2.4/
│                   │   ├── org/
│                   │   │   └── apache/
│                   │   │       └── spark/
│                   │   │           └── sql/
│                   │   │               └── rocketmq/
│                   │   │                   ├── RocketMQSource.scala
│                   │   │                   ├── RocketMQSourceOffset.scala
│                   │   │                   └── RocketMQSourceRDDOffsetRange.scala
│                   │   └── org.apache.spark.streaming/
│                   │       └── RocketMqRDD.scala
│                   ├── scala-spark-3.0/
│                   │   └── org/
│                   │       └── apache/
│                   │           └── spark/
│                   │               ├── sql/
│                   │               │   └── rocketmq/
│                   │               │       ├── RocketMQSource.scala
│                   │               │       ├── RocketMQSourceOffset.scala
│                   │               │       └── RocketMQSourceRDD.scala
│                   │               └── streaming/
│                   │                   └── RocketMqRDD.scala
│                   ├── scala-spark-3.1/
│                   │   └── org/
│                   │       └── apache/
│                   │           └── spark/
│                   │               ├── sql/
│                   │               │   └── rocketmq/
│                   │               │       ├── RocketMQSource.scala
│                   │               │       ├── RocketMQSourceOffset.scala
│                   │               │       └── RocketMQSourceRDD.scala
│                   │               └── streaming/
│                   │                   └── RocketMqRDD.scala
│                   ├── scala-spark-3.2/
│                   │   └── org/
│                   │       └── apache/
│                   │           └── spark/
│                   │               ├── sql/
│                   │               │   └── rocketmq/
│                   │               │       ├── RocketMQSource.scala
│                   │               │       ├── RocketMQSourceOffset.scala
│                   │               │       └── RocketMQSourceRDD.scala
│                   │               └── streaming/
│                   │                   └── RocketMqRDD.scala
│                   └── scala-spark-3.3/
│                       └── org/
│                           └── apache/
│                               └── spark/
│                                   ├── sql/
│                                   │   └── rocketmq/
│                                   │       ├── RocketMQSource.scala
│                                   │       ├── RocketMQSourceOffset.scala
│                                   │       └── RocketMQSourceRDD.scala
│                                   └── streaming/
│                                       └── RocketMqRDD.scala
├── fire-core/
│   ├── pom.xml
│   └── src/
│       └── main/
│           ├── java/
│           │   └── com/
│           │       └── zto/
│           │           └── fire/
│           │               └── core/
│           │                   ├── TimeCost.java
│           │                   ├── anno/
│           │                   │   ├── connector/
│           │                   │   │   ├── HBase.java
│           │                   │   │   ├── HBase2.java
│           │                   │   │   ├── HBase3.java
│           │                   │   │   ├── HBase4.java
│           │                   │   │   ├── HBase5.java
│           │                   │   │   ├── Hive.java
│           │                   │   │   ├── Jdbc.java
│           │                   │   │   ├── Jdbc2.java
│           │                   │   │   ├── Jdbc3.java
│           │                   │   │   ├── Jdbc4.java
│           │                   │   │   ├── Jdbc5.java
│           │                   │   │   ├── Kafka.java
│           │                   │   │   ├── Kafka2.java
│           │                   │   │   ├── Kafka3.java
│           │                   │   │   ├── Kafka4.java
│           │                   │   │   ├── Kafka5.java
│           │                   │   │   ├── RocketMQ.java
│           │                   │   │   ├── RocketMQ2.java
│           │                   │   │   ├── RocketMQ3.java
│           │                   │   │   ├── RocketMQ4.java
│           │                   │   │   └── RocketMQ5.java
│           │                   │   └── lifecycle/
│           │                   │       ├── After.java
│           │                   │       ├── Before.java
│           │                   │       ├── Handle.java
│           │                   │       ├── Process.java
│           │                   │       ├── Step1.java
│           │                   │       ├── Step10.java
│           │                   │       ├── Step11.java
│           │                   │       ├── Step12.java
│           │                   │       ├── Step13.java
│           │                   │       ├── Step14.java
│           │                   │       ├── Step15.java
│           │                   │       ├── Step16.java
│           │                   │       ├── Step17.java
│           │                   │       ├── Step18.java
│           │                   │       ├── Step19.java
│           │                   │       ├── Step2.java
│           │                   │       ├── Step3.java
│           │                   │       ├── Step4.java
│           │                   │       ├── Step5.java
│           │                   │       ├── Step6.java
│           │                   │       ├── Step7.java
│           │                   │       ├── Step8.java
│           │                   │       └── Step9.java
│           │                   ├── bean/
│           │                   │   └── ArthasParam.java
│           │                   └── task/
│           │                       ├── SchedulerManager.java
│           │                       ├── TaskRunner.java
│           │                       └── TaskRunnerQueue.java
│           ├── resources/
│           │   ├── cluster.properties
│           │   └── fire.properties
│           └── scala/
│               └── com/
│                   └── zto/
│                       └── fire/
│                           └── core/
│                               ├── Api.scala
│                               ├── BaseFire.scala
│                               ├── conf/
│                               │   └── AnnoManager.scala
│                               ├── connector/
│                               │   └── Connector.scala
│                               ├── ext/
│                               │   ├── BaseFireExt.scala
│                               │   └── Provider.scala
│                               ├── plugin/
│                               │   ├── ArthasDynamicLauncher.scala
│                               │   ├── ArthasLauncher.scala
│                               │   └── ArthasManager.scala
│                               ├── rest/
│                               │   ├── RestCase.scala
│                               │   ├── RestServerManager.scala
│                               │   └── SystemRestful.scala
│                               ├── sql/
│                               │   ├── SqlExtensionsParser.scala
│                               │   └── SqlParser.scala
│                               ├── sync/
│                               │   ├── LineageAccumulatorManager.scala
│                               │   ├── SyncEngineConf.scala
│                               │   └── SyncManager.scala
│                               ├── task/
│                               │   └── FireInternalTask.scala
│                               └── util/
│                                   └── SingletonFactory.scala
├── fire-engines/
│   ├── .gitignore
│   ├── fire-flink/
│   │   ├── .gitignore
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           ├── java/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── flink/
│   │           │                   ├── anno/
│   │           │                   │   ├── Checkpoint.java
│   │           │                   │   ├── FlinkConf.java
│   │           │                   │   └── Streaming.java
│   │           │                   ├── bean/
│   │           │                   │   ├── CheckpointParams.java
│   │           │                   │   ├── DistributeBean.java
│   │           │                   │   └── FlinkTableSchema.java
│   │           │                   ├── enu/
│   │           │                   │   └── DistributeModule.java
│   │           │                   ├── ext/
│   │           │                   │   └── watermark/
│   │           │                   │       └── FirePeriodicWatermarks.java
│   │           │                   ├── sink/
│   │           │                   │   ├── BaseSink.scala
│   │           │                   │   ├── HBaseSink.scala
│   │           │                   │   └── JdbcSink.scala
│   │           │                   └── task/
│   │           │                       └── FlinkSchedulerManager.java
│   │           ├── resources/
│   │           │   ├── META-INF/
│   │           │   │   └── services/
│   │           │   │       └── org.apache.flink.table.factories.Factory
│   │           │   ├── flink-batch.properties
│   │           │   ├── flink-streaming.properties
│   │           │   └── flink.properties
│   │           ├── scala/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           ├── fire/
│   │           │           │   └── flink/
│   │           │           │       ├── BaseFlink.scala
│   │           │           │       ├── BaseFlinkBatch.scala
│   │           │           │       ├── BaseFlinkCore.scala
│   │           │           │       ├── BaseFlinkStreaming.scala
│   │           │           │       ├── FlinkBatch.scala
│   │           │           │       ├── FlinkCore.scala
│   │           │           │       ├── FlinkStreaming.scala
│   │           │           │       ├── acc/
│   │           │           │       │   └── MultiCounterAccumulator.scala
│   │           │           │       ├── conf/
│   │           │           │       │   ├── FireFlinkConf.scala
│   │           │           │       │   └── FlinkAnnoManager.scala
│   │           │           │       ├── ext/
│   │           │           │       │   ├── batch/
│   │           │           │       │   │   ├── BatchExecutionEnvExt.scala
│   │           │           │       │   │   ├── BatchTableEnvExt.scala
│   │           │           │       │   │   └── DataSetExt.scala
│   │           │           │       │   ├── function/
│   │           │           │       │   │   ├── RichFunctionExt.scala
│   │           │           │       │   │   └── RuntimeContextExt.scala
│   │           │           │       │   ├── provider/
│   │           │           │       │   │   ├── HBaseConnectorProvider.scala
│   │           │           │       │   │   └── JdbcFlinkProvider.scala
│   │           │           │       │   └── stream/
│   │           │           │       │       ├── DataStreamExt.scala
│   │           │           │       │       ├── KeyedStreamExt.scala
│   │           │           │       │       ├── RowExt.scala
│   │           │           │       │       ├── SQLExt.scala
│   │           │           │       │       ├── StreamExecutionEnvExt.scala
│   │           │           │       │       ├── TableEnvExt.scala
│   │           │           │       │       ├── TableExt.scala
│   │           │           │       │       └── TableResultImplExt.scala
│   │           │           │       ├── plugin/
│   │           │           │       │   └── FlinkArthasLauncher.scala
│   │           │           │       ├── rest/
│   │           │           │       │   └── FlinkSystemRestful.scala
│   │           │           │       ├── sql/
│   │           │           │       │   ├── FlinkSqlExtensionsParser.scala
│   │           │           │       │   └── FlinkSqlParserBase.scala
│   │           │           │       ├── sync/
│   │           │           │       │   ├── DistributeSyncManager.scala
│   │           │           │       │   ├── FlinkLineageAccumulatorManager.scala
│   │           │           │       │   └── SyncFlinkEngine.scala
│   │           │           │       ├── task/
│   │           │           │       │   └── FlinkInternalTask.scala
│   │           │           │       └── util/
│   │           │           │           ├── FlinkSingletonFactory.scala
│   │           │           │           ├── FlinkUtils.scala
│   │           │           │           ├── HivePartitionTimeExtractor.scala
│   │           │           │           ├── RocketMQUtils.scala
│   │           │           │           └── StateCleanerUtils.scala
│   │           │           └── fire.scala
│   │           ├── scala-flink-1.12/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── flink/
│   │           │                   └── sql/
│   │           │                       └── FlinkSqlParser.scala
│   │           ├── scala-flink-1.13/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── flink/
│   │           │                   └── sql/
│   │           │                       └── FlinkSqlParser.scala
│   │           └── scala-flink-1.14/
│   │               └── com/
│   │                   └── zto/
│   │                       └── fire/
│   │                           └── flink/
│   │                               └── sql/
│   │                                   └── FlinkSqlParser.scala
│   ├── fire-spark/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           ├── java/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── spark/
│   │           │                   ├── anno/
│   │           │                   │   ├── SparkConf.java
│   │           │                   │   ├── Streaming.java
│   │           │                   │   └── StreamingDuration.java
│   │           │                   ├── bean/
│   │           │                   │   ├── ColumnMeta.java
│   │           │                   │   ├── FunctionMeta.java
│   │           │                   │   ├── GenerateBean.java
│   │           │                   │   ├── RestartParams.java
│   │           │                   │   ├── SparkInfo.java
│   │           │                   │   └── TableMeta.java
│   │           │                   └── task/
│   │           │                       └── SparkSchedulerManager.java
│   │           ├── resources/
│   │           │   ├── spark-core.properties
│   │           │   ├── spark-streaming.properties
│   │           │   ├── spark.properties
│   │           │   └── structured-streaming.properties
│   │           ├── scala/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           ├── fire/
│   │           │           │   └── spark/
│   │           │           │       ├── BaseSpark.scala
│   │           │           │       ├── BaseSparkBatch.scala
│   │           │           │       ├── BaseSparkCore.scala
│   │           │           │       ├── BaseSparkStreaming.scala
│   │           │           │       ├── BaseStructuredStreaming.scala
│   │           │           │       ├── SparkBatch.scala
│   │           │           │       ├── SparkCore.scala
│   │           │           │       ├── SparkStreaming.scala
│   │           │           │       ├── StructuredStreaming.scala
│   │           │           │       ├── acc/
│   │           │           │       │   ├── AccumulatorManager.scala
│   │           │           │       │   ├── EnvironmentAccumulator.scala
│   │           │           │       │   ├── LineageAccumulator.scala
│   │           │           │       │   ├── LogAccumulator.scala
│   │           │           │       │   ├── MultiCounterAccumulator.scala
│   │           │           │       │   ├── MultiTimerAccumulator.scala
│   │           │           │       │   ├── StringAccumulator.scala
│   │           │           │       │   └── SyncAccumulator.scala
│   │           │           │       ├── conf/
│   │           │           │       │   ├── FireSparkConf.scala
│   │           │           │       │   └── SparkAnnoManager.scala
│   │           │           │       ├── connector/
│   │           │           │       │   ├── BeanGenReceiver.scala
│   │           │           │       │   ├── DataGenReceiver.scala
│   │           │           │       │   ├── HBaseBulkConnector.scala
│   │           │           │       │   ├── HBaseBulkFunctions.scala
│   │           │           │       │   └── HBaseSparkBridge.scala
│   │           │           │       ├── ext/
│   │           │           │       │   ├── core/
│   │           │           │       │   │   ├── DStreamExt.scala
│   │           │           │       │   │   ├── DataFrameExt.scala
│   │           │           │       │   │   ├── DatasetExt.scala
│   │           │           │       │   │   ├── RDDExt.scala
│   │           │           │       │   │   ├── SQLContextExt.scala
│   │           │           │       │   │   ├── SparkConfExt.scala
│   │           │           │       │   │   ├── SparkContextExt.scala
│   │           │           │       │   │   ├── SparkSessionExt.scala
│   │           │           │       │   │   └── StreamingContextExt.scala
│   │           │           │       │   └── provider/
│   │           │           │       │       ├── HBaseBulkProvider.scala
│   │           │           │       │       ├── HBaseConnectorProvider.scala
│   │           │           │       │       ├── HBaseHadoopProvider.scala
│   │           │           │       │       ├── JdbcSparkProvider.scala
│   │           │           │       │       ├── KafkaSparkProvider.scala
│   │           │           │       │       ├── SparkProvider.scala
│   │           │           │       │       └── SqlProvider.scala
│   │           │           │       ├── listener/
│   │           │           │       │   ├── FireSparkListener.scala
│   │           │           │       │   └── FireStreamingQueryListener.scala
│   │           │           │       ├── plugin/
│   │           │           │       │   └── SparkArthasLauncher.scala
│   │           │           │       ├── rest/
│   │           │           │       │   └── SparkSystemRestful.scala
│   │           │           │       ├── sink/
│   │           │           │       │   ├── FireSink.scala
│   │           │           │       │   └── JdbcStreamSink.scala
│   │           │           │       ├── sql/
│   │           │           │       │   ├── SparkSqlExtensionsParserBase.scala
│   │           │           │       │   ├── SparkSqlParserBase.scala
│   │           │           │       │   └── SqlExtensions.scala
│   │           │           │       ├── sync/
│   │           │           │       │   ├── DistributeSyncManager.scala
│   │           │           │       │   ├── SparkLineageAccumulatorManager.scala
│   │           │           │       │   └── SyncSparkEngine.scala
│   │           │           │       ├── task/
│   │           │           │       │   └── SparkInternalTask.scala
│   │           │           │       ├── udf/
│   │           │           │       │   └── UDFs.scala
│   │           │           │       └── util/
│   │           │           │           ├── RocketMQUtils.scala
│   │           │           │           ├── SparkSingletonFactory.scala
│   │           │           │           └── SparkUtils.scala
│   │           │           └── fire.scala
│   │           ├── scala-spark-2.3/
│   │           │   └── com.zto.fire.spark.sql/
│   │           │       ├── SparkSqlExtensionsParser.scala
│   │           │       └── SparkSqlParser.scala
│   │           ├── scala-spark-2.4/
│   │           │   └── com.zto.fire.spark.sql/
│   │           │       ├── SparkSqlExtensionsParser.scala
│   │           │       └── SparkSqlParser.scala
│   │           ├── scala-spark-3.0/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── spark/
│   │           │                   └── sql/
│   │           │                       ├── SparkSqlExtensionsParser.scala
│   │           │                       └── SparkSqlParser.scala
│   │           ├── scala-spark-3.1/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── spark/
│   │           │                   └── sql/
│   │           │                       ├── SparkSqlExtensionsParser.scala
│   │           │                       └── SparkSqlParser.scala
│   │           ├── scala-spark-3.2/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── spark/
│   │           │                   └── sql/
│   │           │                       ├── SparkSqlExtensionsParser.scala
│   │           │                       └── SparkSqlParser.scala
│   │           └── scala-spark-3.3/
│   │               └── com/
│   │                   └── zto/
│   │                       └── fire/
│   │                           └── spark/
│   │                               └── sql/
│   │                                   ├── SparkSqlExtensionsParser.scala
│   │                                   └── SparkSqlParser.scala
│   └── pom.xml
├── fire-enhance/
│   ├── apache-arthas/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           └── java/
│   │               └── com/
│   │                   └── taobao/
│   │                       └── arthas/
│   │                           └── agent/
│   │                               └── attach/
│   │                                   └── ArthasAgent.java
│   ├── apache-flink/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           ├── java-flink-1.12/
│   │           │   └── org/
│   │           │       ├── apache/
│   │           │       │   └── flink/
│   │           │       │       ├── client/
│   │           │       │       │   └── deployment/
│   │           │       │       │       └── application/
│   │           │       │       │           └── ApplicationDispatcherBootstrap.java
│   │           │       │       ├── configuration/
│   │           │       │       │   └── GlobalConfiguration.java
│   │           │       │       ├── contrib/
│   │           │       │       │   └── streaming/
│   │           │       │       │       └── state/
│   │           │       │       │           ├── RocksDBStateBackend.java
│   │           │       │       │           └── restore/
│   │           │       │       │               └── RocksDBFullRestoreOperation.java
│   │           │       │       ├── runtime/
│   │           │       │       │   ├── checkpoint/
│   │           │       │       │   │   └── CheckpointCoordinator.java
│   │           │       │       │   └── util/
│   │           │       │       │       └── EnvironmentInformation.java
│   │           │       │       ├── table/
│   │           │       │       │   └── api/
│   │           │       │       │       └── internal/
│   │           │       │       │           └── TableEnvironmentImpl.java
│   │           │       │       └── util/
│   │           │       │           └── ExceptionUtils.java
│   │           │       └── rocksdb/
│   │           │           └── RocksDB.java
│   │           ├── java-flink-1.13/
│   │           │   └── org/
│   │           │       └── apache/
│   │           │           └── flink/
│   │           │               ├── client/
│   │           │               │   └── deployment/
│   │           │               │       └── application/
│   │           │               │           └── ApplicationDispatcherBootstrap.java
│   │           │               ├── configuration/
│   │           │               │   └── GlobalConfiguration.java
│   │           │               ├── contrib/
│   │           │               │   └── streaming/
│   │           │               │       └── state/
│   │           │               │           └── EmbeddedRocksDBStateBackend.java
│   │           │               ├── runtime/
│   │           │               │   ├── checkpoint/
│   │           │               │   │   └── CheckpointCoordinator.java
│   │           │               │   └── util/
│   │           │               │       └── EnvironmentInformation.java
│   │           │               ├── table/
│   │           │               │   └── api/
│   │           │               │       └── internal/
│   │           │               │           └── TableEnvironmentImpl.java
│   │           │               └── util/
│   │           │                   └── ExceptionUtils.java
│   │           └── java-flink-1.14/
│   │               └── org/
│   │                   ├── apache/
│   │                   │   └── flink/
│   │                   │       ├── client/
│   │                   │       │   └── deployment/
│   │                   │       │       └── application/
│   │                   │       │           └── ApplicationDispatcherBootstrap.java
│   │                   │       ├── configuration/
│   │                   │       │   └── GlobalConfiguration.java
│   │                   │       ├── connector/
│   │                   │       │   └── jdbc/
│   │                   │       │       ├── dialect/
│   │                   │       │       │   ├── AdbDialect.java
│   │                   │       │       │   ├── JdbcDialect.java
│   │                   │       │       │   ├── JdbcDialects.java
│   │                   │       │       │   ├── MySQLDialect.java
│   │                   │       │       │   └── OracleSQLDialect.java
│   │                   │       │       └── internal/
│   │                   │       │           └── converter/
│   │                   │       │               └── OracleSQLRowConverter.java
│   │                   │       ├── contrib/
│   │                   │       │   └── streaming/
│   │                   │       │       └── state/
│   │                   │       │           └── EmbeddedRocksDBStateBackend.java
│   │                   │       ├── runtime/
│   │                   │       │   ├── checkpoint/
│   │                   │       │   │   └── CheckpointCoordinator.java
│   │                   │       │   └── util/
│   │                   │       │       └── EnvironmentInformation.java
│   │                   │       ├── streaming/
│   │                   │       │   └── connectors/
│   │                   │       │       └── kafka/
│   │                   │       │           ├── FlinkKafkaConsumer.java
│   │                   │       │           └── FlinkKafkaConsumerBase.java
│   │                   │       ├── table/
│   │                   │       │   └── api/
│   │                   │       │       └── internal/
│   │                   │       │           └── TableEnvironmentImpl.java
│   │                   │       └── util/
│   │                   │           └── ExceptionUtils.java
│   │                   └── rocksdb/
│   │                       └── RocksDB.java
│   ├── apache-spark/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           └── scala-spark-3.0/
│   │               └── org/
│   │                   └── apache/
│   │                       └── spark/
│   │                           ├── internal/
│   │                           │   └── config/
│   │                           │       └── Streaming.scala
│   │                           ├── sql/
│   │                           │   └── execution/
│   │                           │       └── datasources/
│   │                           │           └── InsertIntoHadoopFsRelationCommand.scala
│   │                           └── streaming/
│   │                               └── scheduler/
│   │                                   └── ExecutorAllocationManager.scala
│   └── pom.xml
├── fire-examples/
│   ├── flink-examples/
│   │   ├── pom.xml
│   │   └── src/
│   │       ├── main/
│   │       │   ├── java/
│   │       │   │   └── com/
│   │       │   │       └── zto/
│   │       │   │           └── fire/
│   │       │   │               ├── examples/
│   │       │   │               │   └── bean/
│   │       │   │               │       ├── People.java
│   │       │   │               │       └── Student.java
│   │       │   │               └── sql/
│   │       │   │                   └── SqlCommandParser.java
│   │       │   ├── resources/
│   │       │   │   ├── META-INF/
│   │       │   │   │   └── services/
│   │       │   │   │       └── org.apache.flink.table.factories.Factory
│   │       │   │   ├── common.properties
│   │       │   │   ├── connector/
│   │       │   │   │   └── hive/
│   │       │   │   │       └── HiveSinkTest.properties
│   │       │   │   ├── log4j.properties
│   │       │   │   └── stream/
│   │       │   │       └── ConfigCenterTest.properties
│   │       │   └── scala/
│   │       │       └── com/
│   │       │           └── zto/
│   │       │               └── fire/
│   │       │                   └── examples/
│   │       │                       └── flink/
│   │       │                           ├── FlinkDemo.scala
│   │       │                           ├── FlinkSQLDemo.scala
│   │       │                           ├── Test.scala
│   │       │                           ├── acc/
│   │       │                           │   └── FlinkAccTest.scala
│   │       │                           ├── batch/
│   │       │                           │   ├── FireMapFunctionTest.scala
│   │       │                           │   ├── FlinkBatchTest.scala
│   │       │                           │   └── FlinkBrocastTest.scala
│   │       │                           ├── connector/
│   │       │                           │   ├── FlinkHudiTest.scala
│   │       │                           │   ├── bean/
│   │       │                           │   │   ├── BeanConnectorTest.scala
│   │       │                           │   │   ├── BeanDynamicTableFactory.scala
│   │       │                           │   │   ├── BeanDynamicTableSink.scala
│   │       │                           │   │   ├── BeanDynamicTableSource.scala
│   │       │                           │   │   └── BeanOptions.scala
│   │       │                           │   ├── clickhouse/
│   │       │                           │   │   └── ClickhouseTest.scala
│   │       │                           │   ├── hive/
│   │       │                           │   │   ├── HiveBatchSinkTest.scala
│   │       │                           │   │   └── HiveSinkTest.scala
│   │       │                           │   ├── kafka/
│   │       │                           │   │   └── KafkaConsumer.scala
│   │       │                           │   ├── rocketmq/
│   │       │                           │   │   ├── RocketMQConnectorTest.scala
│   │       │                           │   │   └── RocketTest.scala
│   │       │                           │   └── sql/
│   │       │                           │       ├── DDL.scala
│   │       │                           │       └── DataGenTest.scala
│   │       │                           ├── lineage/
│   │       │                           │   ├── FlinkSqlLineageTest.scala
│   │       │                           │   └── LineageTest.scala
│   │       │                           ├── module/
│   │       │                           │   ├── ArthasTest.scala
│   │       │                           │   └── ExceptionTest.scala
│   │       │                           ├── sql/
│   │       │                           │   ├── HiveDimDemo.scala
│   │       │                           │   ├── HiveWriteDemo.scala
│   │       │                           │   ├── JdbcDimDemo.scala
│   │       │                           │   ├── RocketMQConnectorTest.scala
│   │       │                           │   ├── SimpleSqlDemo.scala
│   │       │                           │   └── SqlJoinDemo.scala
│   │       │                           ├── stream/
│   │       │                           │   ├── ConfigCenterTest.scala
│   │       │                           │   ├── FlinkHiveTest.scala
│   │       │                           │   ├── FlinkPartitioner.scala
│   │       │                           │   ├── FlinkRetractStreamTest.scala
│   │       │                           │   ├── FlinkSinkHiveTest.scala
│   │       │                           │   ├── FlinkSinkTest.scala
│   │       │                           │   ├── FlinkSourceTest.scala
│   │       │                           │   ├── FlinkStateTest.scala
│   │       │                           │   ├── HBaseTest.scala
│   │       │                           │   ├── HiveRW.scala
│   │       │                           │   ├── JdbcTest.scala
│   │       │                           │   ├── UDFTest.scala
│   │       │                           │   ├── WatermarkTest.scala
│   │       │                           │   └── WindowTest.scala
│   │       │                           └── util/
│   │       │                               └── StateCleaner.scala
│   │       └── test/
│   │           └── scala/
│   │               └── com/
│   │                   └── zto/
│   │                       └── fire/
│   │                           └── examples/
│   │                               └── flink/
│   │                                   ├── anno/
│   │                                   │   └── AnnoConfTest.scala
│   │                                   ├── core/
│   │                                   │   └── BaseFlinkTester.scala
│   │                                   └── jdbc/
│   │                                       └── JdbcUnitTest.scala
│   ├── pom.xml
│   └── spark-examples/
│       ├── pom.xml
│       └── src/
│           ├── main/
│           │   ├── java/
│           │   │   └── com/
│           │   │       └── zto/
│           │   │           └── fire/
│           │   │               └── examples/
│           │   │                   └── bean/
│           │   │                       ├── Hudi.java
│           │   │                       ├── Student.java
│           │   │                       └── StudentMulti.java
│           │   ├── resources/
│           │   │   ├── common.properties
│           │   │   ├── jdbc/
│           │   │   │   └── JdbcTest.properties
│           │   │   └── streaming/
│           │   │       └── ConfigCenterTest.properties
│           │   └── scala/
│           │       └── com/
│           │           └── zto/
│           │               └── fire/
│           │                   └── examples/
│           │                       └── spark/
│           │                           ├── SparkDemo.scala
│           │                           ├── SparkSQLDemo.scala
│           │                           ├── Test.scala
│           │                           ├── acc/
│           │                           │   └── FireAccTest.scala
│           │                           ├── hbase/
│           │                           │   ├── HBaseConnectorTest.scala
│           │                           │   ├── HBaseHadoopTest.scala
│           │                           │   ├── HBaseStreamingTest.scala
│           │                           │   ├── HbaseBulkTest.scala
│           │                           │   └── HiveQL.scala
│           │                           ├── hive/
│           │                           │   ├── HiveClusterReader.scala
│           │                           │   ├── HiveMetadataTest.scala
│           │                           │   └── HiveRW.scala
│           │                           ├── jdbc/
│           │                           │   ├── JdbcStreamingTest.scala
│           │                           │   └── JdbcTest.scala
│           │                           ├── lineage/
│           │                           │   ├── DataSourceTest.scala
│           │                           │   ├── LineageTest.scala
│           │                           │   └── SparkCoreLineageTest.scala
│           │                           ├── module/
│           │                           │   ├── ArthasTest.scala
│           │                           │   └── ExceptionTest.scala
│           │                           ├── schedule/
│           │                           │   ├── ScheduleTest.scala
│           │                           │   └── Tasks.scala
│           │                           ├── sql/
│           │                           │   ├── LoadTestSQL.scala
│           │                           │   └── SparkSqlParseTest.scala
│           │                           ├── streaming/
│           │                           │   ├── AtLeastOnceTest.scala
│           │                           │   ├── ConfigCenterTest.scala
│           │                           │   ├── DataGenTest.scala
│           │                           │   ├── KafkaTest.scala
│           │                           │   └── RocketTest.scala
│           │                           ├── structured/
│           │                           │   ├── JdbcSinkTest.scala
│           │                           │   ├── MapTest.scala
│           │                           │   └── StructuredStreamingTest.scala
│           │                           └── thread/
│           │                               └── ThreadTest.scala
│           └── test/
│               ├── resources/
│               │   ├── ConfigCenterUnitTest.properties
│               │   ├── SparkSQLParserTest.properties
│               │   └── common.properties
│               ├── scala/
│               │   └── com/
│               │       └── zto/
│               │           └── fire/
│               │               └── examples/
│               │                   └── spark/
│               │                       ├── anno/
│               │                       │   └── AnnoConfTest.scala
│               │                       ├── conf/
│               │                       │   └── ConfigCenterUnitTest.scala
│               │                       ├── core/
│               │                       │   └── BaseSparkTester.scala
│               │                       ├── hbase/
│               │                       │   ├── HBaseApiTest.scala
│               │                       │   ├── HBaseBaseTester.scala
│               │                       │   ├── HBaseBulkUnitTest.scala
│               │                       │   ├── HBaseConnectorUnitTest.scala
│               │                       │   └── HBaseHadoopUnitTest.scala
│               │                       ├── hive/
│               │                       │   └── HiveUnitTest.scala
│               │                       ├── jdbc/
│               │                       │   ├── JdbcConnectorTest.scala
│               │                       │   └── JdbcUnitTest.scala
│               │                       └── parser/
│               │                           └── SparkSQLParserTest.scala
│               └── scala-spark-3.0/
│                   └── com/
│                       └── zto/
│                           └── fire/
│                               └── examples/
│                                   └── spark/
│                                       └── sql/
│                                           └── SparkSqlParseTest.scala
├── fire-external/
│   ├── .gitignore
│   ├── fire-apollo/
│   │   ├── pom.xml
│   │   └── src/
│   │       ├── main/
│   │       │   ├── resources/
│   │       │   │   └── apollo.properties
│   │       │   └── scala/
│   │       │       └── com/
│   │       │           └── zto/
│   │       │               └── fire/
│   │       │                   └── apollo/
│   │       │                       └── util/
│   │       │                           ├── ApolloConfigUtil.scala
│   │       │                           └── ApolloConstant.scala
│   │       └── test/
│   │           └── scala/
│   │               └── com/
│   │                   └── zto/
│   │                       └── fire/
│   │                           └── apollo/
│   │                               └── util/
│   │                                   └── ApolloConfigUtilTest.scala
│   └── pom.xml
├── fire-metrics/
│   ├── pom.xml
│   └── src/
│       ├── main/
│       │   └── java/
│       │       └── com/
│       │           └── zto/
│       │               └── fire/
│       │                   └── metrics/
│       │                       └── MetricsDemo.scala
│       └── test/
│           ├── java/
│           │   └── com/
│           │       └── zto/
│           │           └── fire/
│           │               └── jmx/
│           │                   ├── Hello.java
│           │                   ├── HelloMBean.java
│           │                   ├── JmxApp.java
│           │                   ├── QueueSample.java
│           │                   ├── QueueSampler.java
│           │                   └── QueueSamplerMXBean.java
│           └── scala/
│               └── com.zto.fire.metrics/
│                   └── MetricsTest.scala
├── fire-platform/
│   └── pom.xml
├── fire-shell/
│   ├── flink-shell/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           ├── java/
│   │           │   └── org/
│   │           │       └── apache/
│   │           │           └── flink/
│   │           │               └── api/
│   │           │                   └── java/
│   │           │                       ├── JarHelper.java
│   │           │                       ├── ScalaShellEnvironment.java
│   │           │                       └── ScalaShellStreamEnvironment.java
│   │           ├── java-flink-1.12/
│   │           │   └── org.apache.flink.streaming.api.environment/
│   │           │       └── StreamExecutionEnvironment.java
│   │           ├── java-flink-1.13/
│   │           │   └── org.apache.flink.streaming.api.environment/
│   │           │       └── StreamExecutionEnvironment.java
│   │           └── scala/
│   │               ├── com/
│   │               │   └── zto/
│   │               │       └── fire/
│   │               │           └── shell/
│   │               │               └── flink/
│   │               │                   ├── FireILoop.scala
│   │               │                   └── Test.scala
│   │               └── org/
│   │                   └── apache/
│   │                       └── flink/
│   │                           └── api/
│   │                               └── scala/
│   │                                   └── FlinkShell.scala
│   ├── pom.xml
│   └── spark-shell/
│       ├── pom.xml
│       └── src/
│           └── main/
│               └── scala-spark-3.0/
│                   ├── com/
│                   │   └── zto/
│                   │       └── fire/
│                   │           └── shell/
│                   │               └── spark/
│                   │                   ├── FireILoop.scala
│                   │                   ├── Main.scala
│                   │                   └── Test.scala
│                   └── org/
│                       └── apache/
│                           └── spark/
│                               └── repl/
│                                   ├── ExecutorClassLoader.scala
│                                   └── Signaling.scala
└── pom.xml

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.idea/*
fire-parent.iml
*.iml
target/
*.log

================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Fire Framework

  Fire is a big data framework developed and open-sourced by **ZTO Big Data**, built specifically for developing **Spark** and **Flink** jobs. It hides low-level technical details and provides a large set of simple APIs that help developers build real-time computing jobs faster. Fire also ships with built-in platform features for integrating with real-time platforms. At ZTO, jobs built on Fire process **hundreds of billions of records** per day, covering **Spark** (batch & streaming), **Flink**, and many other computing scenarios.

## 1. It's This Simple!

### 1.1 Flink Example

```scala
@Config(
  """
    |state.checkpoints.num-retained=30 	# supports any Flink tuning parameter, Fire parameter, or user-defined parameter
    |state.checkpoints.dir=hdfs:///user/flink/checkpoint
    |""")
@Hive("thrift://localhost:9083") // connect to the specified Hive metastore
@Streaming(interval = 100, unaligned = true) // checkpoint every 100s, with unaligned checkpoints enabled
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
object FlinkDemo extends FlinkStreaming {

  @Process
  def kafkaSource: Unit = {
    val dstream = this.fire.createKafkaDirectStream() 	// consume Kafka via the API
    sql("""create table statement ...""")
    sql("""insert into statement ...""")
  }
}
```

### 1.2 Spark Example

```scala
@Config(
  """
    |spark.shuffle.compress=true		# supports any Spark tuning parameter, Fire parameter, or user-defined parameter
    |spark.ui.enabled=true
    |""")
@Hive("thrift://localhost:9083") // connect to the specified Hive metastore
@Streaming(interval = 100, maxRatePerPartition = 100) // 100s streaming batches, with a capped consumption rate
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
object SparkDemo extends SparkStreaming {

  @Process
  def kafkaSource: Unit = {
    val dstream = this.fire.createKafkaDirectStream() 	// consume Kafka via the API
    sql("""select * from xxx""").show()
  }
}
```

***Note: Structured Streaming, Spark Core, Flink SQL, and Flink batch jobs are all supported, with the same code structure as the examples above.***

## *[2. Development Docs](./docs/index.md)*

## 3. Highlights

### 3.1 Compatible with Mainstream Versions

  Fire is adapted to multiple Spark and Flink versions: it supports all Spark releases from 2.x up and all Flink releases from 1.10 up, and can be compiled against Scala 2.11 or Scala 2.12.

```shell
# build Fire against whichever engine versions you actually need
mvn clean install -DskipTests -Pspark-3.0.2 -Pflink-1.14.3 -Pscala-2.12
```

| Apache Spark | Apache Flink |
| ------------ | ------------ |
| 2.3.x        | 1.10.x       |
| 2.4.x        | 1.11.x       |
| 3.0.x        | 1.12.x       |
| 3.1.x        | 1.13.x       |
| 3.2.x        | 1.14.x       |
| 3.3.x        | 1.15.x       |

### **3.2 Simple and Easy to Use**

  Fire is highly encapsulated and hides a great deal of technical detail; many connectors need only a single line of code for their core functionality. Fire also unifies the commonly used APIs of Spark and Flink, so both engines can be programmed in one consistent code style.

- **HBase API**

```scala
// read the rows for the given rowkeys from HBase and return the result set as a DataFrame
val studentDF: DataFrame = this.fire.hbaseGetDF(hTableName, classOf[Student], getRDD)
// insert the given dataset into the specified HBase table in a distributed fashion
this.fire.hbasePutDF(hTableName, studentDF, classOf[Student])
```

- **JDBC API**

```scala
// insert the specified DataFrame columns into a relational database, 100 rows per batch
df.jdbcBatchUpdate(insertSql, Seq("name", "age", "createTime", "length", "sex"), batch = 100)
// map the query result into a DataFrame via reflection
val df: DataFrame = this.fire.jdbcQueryDF(querySql, Seq(1, 2, 3), classOf[Student])
```

### **3.3 Flexible Configuration**

  Configuration is supported via an interface, Apollo, properties files, and annotations. Spark & Flink **engine parameters**, **Fire framework parameters**, and **user-defined parameters** can be mixed freely, and configuration can be changed dynamically at runtime. The common approaches are listed below ([configuration guide](./docs/dev/config.md)):

1. **Properties file:** create a properties file named after the job class and configure parameters there
2. **Interface-based:** Fire provides a configuration interface; fetching configuration through it enables platform-level configuration management
3. **Annotation-based:** configure cluster environments, connectors, and tuning parameters via annotations. Common annotations:

```scala
@Config(
  """
    |# supports Flink tuning parameters, Fire framework parameters, user-defined parameters, etc.
    |state.checkpoints.num-retained=30
    |state.checkpoints.dir=hdfs:///user/flink/checkpoint
    |""")
@Hive("thrift://localhost:9083")
@Checkpoint(interval = 100, unaligned = true)
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
@RocketMQ(brokers = "bigdata_test", topics = "fire", groupId = "fire", tag = "*", startingOffset = "latest")
@Jdbc(url = "jdbc:mysql://mysql-server:3306/fire", username = "root", password = "..root726")
@HBase("localhost:2181")
```
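
For comparison, the properties-file approach (item 1 above) keeps the same kinds of keys in a file named after the job class. A hypothetical sketch (the file name and values below are illustrative, mirroring the examples in this README):

```properties
# Hypothetical SparkDemo.properties, named after the job class (illustrative)
# Engine tuning parameters, Fire parameters, and user-defined keys can be mixed
spark.shuffle.compress=true
spark.ui.enabled=true
my.conf=hello
```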

**Fetching configuration:**

  Fire wraps a unified configuration API. With it, whether on Spark or Flink, on the Driver/JobManager side or the Executor/TaskManager side, any configuration value can be fetched in a single line of code. This API removes the need to override the open method in Flink's map and other operators, which is very convenient.

```scala
this.conf.getString("my.conf")
this.conf.getInt("state.checkpoints.num-retained")
...
```

### **3.4 Multi-Cluster Support**

  Fire's configuration supports any number of clusters. For example, a single job can configure several HBase or Kafka data sources at the same time, distinguished by a numeric suffix (**keyNum**):

```scala
// suppose multiple HBase clusters are configured via annotations:
@HBase("localhost:2181")
@HBase2(cluster = "192.168.0.1:2181", storageLevel = "DISK_ONLY")

// in code, the numeric suffix selects the cluster
this.fire.hbasePutDF(hTableName, studentDF, classOf[Student])	// keyNum=1 by default: uses the cluster configured by @HBase
this.fire.hbasePutDF(hTableName2, studentDF, classOf[Student], keyNum=2)	// keyNum=2: uses the cluster configured by @HBase2
```

### **3.5 Common Connector Support**

  Supports common connectors such as Kafka, RocketMQ, Redis, HBase, JDBC, ClickHouse, Hive, Hudi, TiDB, and ADB.

### **3.6 [Hot Checkpoint Reconfiguration](./docs/highlight/checkpoint.md)**

  Supports adjusting the checkpoint interval, timeout, number of concurrent checkpoints, and other parameters at runtime, avoiding the checkpoint pressure caused by backpressure when a job restarts.

### **3.7 [Hot Streaming Restart](./docs/highlight/spark-duration.md)**

  This feature mainly targets Spark Streaming jobs: through hot-restart, the batch interval can be changed without restarting the Spark Streaming application. For example, setting a job's batch interval to 10s from the web UI takes effect immediately.

### **3.8 Hot Configuration Updates**

  Updating a configuration entry on the web page is enough for the running job to receive the new value and apply it immediately. The most typical use case is adjusting the partition count of a Spark operator: when the job is handling a large data volume, the repartition count can be raised through this feature and takes effect immediately.

### **3.9 Online Performance Diagnosis**

  Fire deeply integrates Arthas for dynamic performance diagnosis of running jobs. It exposes REST endpoints for Arthas: through API calls, an Arthas diagnosis thread can be started or stopped on the driver/JobManager or on executors/TaskManagers. The thread registers with a unified Arthas tunnel service, after which Arthas commands can be entered on the web page for performance diagnosis.

![arthas-shell](docs/img/arthas-shell.png)

### **3.10 Online SQL Debugging**

  Fire exposes RESTful endpoints through which a platform can pass SQL statements to Fire at runtime. Fire submits the SQL to the corresponding engine and returns the execution result over the same API, enabling online debugging of real-time SQL jobs and avoiding the time cost of repeatedly editing, publishing, and rerunning code.

### **3.11 Real-Time Lineage**

  Fire can analyze, at runtime, the data sources, databases and tables, and operation types used by each job, and exposes this lineage information through an API. A real-time platform or other web system can fetch live lineage information via API calls.

### **3.12 Scheduled Tasks**

  Fire embeds the Quartz framework internally, so a scheduled task can be registered simply by annotating a method with @Scheduled.

```scala
  /**
   * A method annotated with @Scheduled is a scheduled task and runs periodically
   *
   * @scope runs on both the driver and executors by default; if "driver" is specified, it runs only on the driver
   * @initialDelay how long to wait before the first execution
   */
  @Scheduled(cron = "0/5 * * * * ?", scope = "driver", initialDelay = 60000)
  def loadTable: Unit = {
    this.logger.info("reloading dimension table")
  }
```

### **3.13 Seamless Platform Integration**

  Fire has a built-in RESTful service and exposes many of its capabilities as APIs, so a real-time platform can connect to every job's information through the endpoints Fire exposes.

### **3.14 fire-shell**

  Fire integrates the Spark shell and Flink shell, supporting dynamic REPL-style debugging of Spark and Flink jobs with access to all of Fire's APIs. Fire also exposes the shell capability to the real-time platform through APIs, so Spark and Flink jobs can be debugged from a web page.

## *[4. Changelog](./docs/feature.md)*

## 5. Join Us

**Tech discussion (DingTalk group): *35373471***

<img src="./docs/img/dingding.jpeg" width="500px" height="600px">


================================================
FILE: docs/accumulator.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Accumulators

Fire provides deeply customized accumulators for both Spark and Flink. The API needs no up-front declaration of accumulator variables and can be used anywhere. [Example code](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/acc/FireAccTest.scala)

### 1. Basic Usage

```scala
// message ingestion
val dstream = this.fire.createKafkaDirectStream()
dstream.foreachRDD(rdd => {
    rdd.coalesce(this.conf.getInt(key, 10)).foreachPartition(t => {
        // single-value accumulator
        this.acc.addCounter(1)
        // multi-value accumulator: values are accumulated separately per key; the two
        // lines below accumulate into the "multiCounter" and "partitions" accumulators
        this.acc.addMultiCounter("multiCounter", 1)
        this.acc.addMultiCounter("partitions", 1)
        // multi-timer accumulator: a multi-value accumulator with an extra time dimension,
        // e.g. hbaseWriter 2019-09-10 11:00:00  10
        // e.g. hbaseWriter 2019-09-10 11:01:00  21
        this.acc.addMultiTimer("multiTimer", 1)
    })
})
```

### 2. Accumulator Types

1. Single-value accumulator

   Accumulates all values into one and the same accumulator, which is globally unique.

2. Multi-value accumulator

   Distinguishes accumulator instances by a string key: values with the same key are accumulated together, which makes it more powerful than the single-value accumulator.

3. Time-dimension accumulator

   Builds on the multi-value accumulator by adding a time dimension, using the timestamp together with the accumulator label as a composite key. For a key such as hbase_sink, statistics are bucketed per minute by default, and each new minute starts a fresh accumulation window. The timestamp format can be changed via a parameter to bucket by minute, hour, day, month, year, and so on.

   ```scala
   // multi-timer accumulator: a multi-value accumulator with an extra time dimension,
   // e.g. hbaseWriter 2019-09-10 11:00:00  10
   // e.g. hbaseWriter 2019-09-10 11:01:00  21
   this.acc.addMultiTimer("multiTimer", 1)
   // specify the timestamp schema to accumulate in hourly windows
   this.acc.addMultiTimer("multiTimer", 1, schema = "YYYY-MM-dd HH")
   ```
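
Conceptually, the time-dimension accumulator keys each count by the pair (accumulator label, formatted timestamp), so changing the timestamp pattern changes the window size. A minimal standalone sketch of this idea (not Fire's actual implementation; the class and method names below are made up for illustration):

```scala
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import scala.collection.mutable

// Sketch only: counts keyed by (label, time bucket). With a minute-level
// pattern each new minute opens a fresh window; an hour-level pattern
// ("yyyy-MM-dd HH") widens the window to one hour.
class MultiTimerSketch(pattern: String = "yyyy-MM-dd HH:mm") {
  private val fmt = DateTimeFormatter.ofPattern(pattern)
  private val counters = mutable.Map.empty[(String, String), Long].withDefaultValue(0L)

  def add(label: String, value: Long, now: LocalDateTime = LocalDateTime.now()): Unit = {
    val bucket = now.format(fmt) // e.g. "2019-09-10 11:00"
    counters((label, bucket)) = counters((label, bucket)) + value
  }

  def get(label: String, bucket: String): Long = counters((label, bucket))
}
```

Fire's real addMultiTimer additionally aggregates across executors/TaskManagers; the sketch only illustrates the per-window keying.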

### 3. Reading Accumulator Values

 1. In the program

    ```scala
    /**
      * read the values of the accumulators
      */
    @Scheduled(fixedInterval = 60 * 1000)
    def printAcc: Unit = {
        this.acc.getMultiTimer.cellSet().foreach(t => println(s"key:" + t.getRowKey + "       time:" + t.getColumnKey + " " + t.getValue + " records"))
        println("single-value: " + this.acc.getCounter)
        this.acc.getMultiCounter.foreach(t => {
            println("multi-value: key=" + t._1 + " value=" + t._2)
        })
        val size = this.acc.getMultiTimer.cellSet().size()
        println(s"===multiTimer.size=${size}==log.size=${this.acc.getLog.size()}===")
    }
    ```

    

 2. Via platform APIs

    Fire provides dedicated endpoints for reading accumulators, so a platform can fetch the latest accumulator statistics in real time via API calls.

    | Endpoint             | Purpose                                              |
    | -------------------- | ---------------------------------------------------- |
    | /system/counter      | Returns the single-value accumulator.                |
    | /system/multiCounter | Returns the multi-value accumulators.                |
    | /system/multiTimer   | Returns the time-dimension multi-value accumulators. |

================================================
FILE: docs/anno.md
================================================
# Fire Framework: Simplifying Flink and Spark Development with Annotations

  Java introduced **annotations** in JDK 5, and they were soon adopted widely across development frameworks, most notably Spring. Before annotations, Spring configuration usually had to be written in XML, which was verbose, hard to remember, and error-prone. Spring's developers recognized this and introduced annotations on a large scale to replace traditional XML configuration.

  In the big data era, distributed compute engines represented by Hadoop, Spark, and Flink appeared one after another. Many developers coming from Java web work found the APIs of these frameworks unfamiliar at first, and all of these frameworks, without exception, left annotations aside. Many big data developers have probably imagined integrating Spring into a Spark or Flink project, but doing so raises real problems, because Spring is not suited to distributed big data computing.

  **So where is the Spring(time) of real-time development? In the Fire framework!** To lower the barrier to real-time development as far as possible, reduce code volume, and make configuration more convenient, the **ZTO Express big data team** built the **Fire framework**. Fire has matured inside ZTO over many years; today **thousands of** production jobs are built on it, processing data on the order of **hundreds of billions of records** per day and passing one Double 11 shopping festival after another. Fire is **open source and free**, supports both mainstream engines, **Spark** and **Flink**, and is concise, convenient, and extremely easy to pick up: a few minutes is enough to learn it.

## 1. Quick Start Examples

### 1.1 Flink Example

```scala
@Streaming(interval = 100, unaligned = true, parallelism = 4) // checkpoint every 100s, with unaligned checkpoints enabled
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
object FlinkDemo extends FlinkStreaming {

  @Process
  def kafkaSource: Unit = {
    val dstream = this.fire.createKafkaDirectStream() 	// consume Kafka via the API
    sql("""create table statement ...""")
    sql("""insert into statement ...""")
  }
}
```

### 1.2 Spark example

```scala
@Config(
  """
    |# Supports Spark/Flink tuning parameters, Fire framework parameters, and user-defined parameters
    |spark.shuffle.compress=true
    |spark.ui.enabled=true
    |""")
@Hive("thrift://localhost:9083") // connect to the specified hive metastore
@Streaming(interval = 100, maxRatePerPartition = 100) // 100s per streaming batch, with a per-partition rate limit
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
object SparkDemo extends SparkStreaming {

  @Process
  def kafkaSource: Unit = {
    val dstream = this.fire.createKafkaDirectStream() 	// consume kafka via the API
    sql("""select * from xxx""").show()
  }
}
```

  The two examples above show that Fire gives Spark and Flink development a unified programming style: integrating the framework is just a matter of using Fire's annotations and the corresponding parent class. The **@Config** annotation supports multi-line configuration, covering Spark or Flink tuning parameters, built-in Fire parameters, and user-defined parameters. When the job starts, Fire uses this configuration to initialize the context (**SparkSession**, **ExecutionEnvironment**, etc.), eliminating large amounts of boilerplate initialization code. @Config is optional: without it, Fire initializes the engine context with sensible defaults. The **@Process** annotation marks the entry point of the developer's logic; the annotated method is invoked by the Fire framework automatically. A main method is also unnecessary, since it is defined in the parent class.

  The **@Streaming** annotation applies to both Spark Streaming and Flink. **interval** sets the Spark Streaming batch interval, or the Flink **checkpoint** interval. **parallelism** sets the global parallelism of a Flink job, while maxRatePerPartition limits the Spark Streaming consumption rate. @Streaming offers more besides, including the checkpoint timeout, unaligned checkpoints, and the number of Spark Streaming batches allowed to execute concurrently.

  The **@Hive** annotation specifies the hive metastore url and works for both Spark and Flink. When it is present, Fire creates the hive catalog for Spark or Flink during internal initialization: no hive-site.xml in the resources directory, and no manual hive conf settings on the SparkSession or ExecutionEnvironment.

  The **@Kafka** and **@RocketMQ** annotations configure message-queue consumption. Once declared, a single line of code connects the source:

```scala
val dstream = this.fire.createKafkaDirectStream() 	// consume kafka via the API
val dStream = this.fire.createRocketMqPullStream()  // consume rocketmq via the API
```

### 1.3 SQL example

  Fire also provides a very concise API for pure-SQL development: put the SQL statements (multiple statements separated by semicolons) into the **sql()** method and annotate the method with **@Step**, and they are executed in order. The description in each @Step annotation is automatically written to the log by Fire, which helps with tracing and troubleshooting. @Step and sql() work equally well in pure-API code; they are general-purpose.

```scala
@Streaming(interval = 60, parallelism = 2)
object JdbcDimDemo extends FlinkStreaming {

  @Step1("define data sources")
  def ddl: Unit = {
    sql(
      """
        |CREATE TABLE t_mysql_dim (
        |  `id` BIGINT ...
        |) WITH ( ...
        |);
        |
        |CREATE TABLE t_kafka_fire (
        |  `id` BIGINT...
        |) WITH ( ...
        |)
        |""".stripMargin)
  }

  @Step2("join kafka data with the mysql dim table")
  def showJoin: Unit = {
    sql(
      """
        |select
        | xxx
        |from t_kafka_fire t1 left join t_mysql_dim  t2 on t1.id=t2.id
        |""".stripMargin).print()
  }
}

```

With the code above, Fire executes the logic in the order of the **@Step** annotations and prints log output similar to the following:

![anno_log](img/anno_log.png)
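The "multiple statements separated by semicolons" behavior of sql() can be sketched as a split-and-execute loop. The splitting rule comes from the text above; the code itself is an assumed illustration (naive: it ignores semicolons inside string literals):

```python
def split_statements(script: str) -> list:
    """Split a SQL script on semicolons into individual statements,
    dropping empty fragments, so each can be executed in order."""
    return [s.strip() for s in script.split(";") if s.strip()]

script = """
CREATE TABLE t_mysql_dim (id BIGINT);
CREATE TABLE t_kafka_fire (id BIGINT)
"""
```

Each returned statement would then be handed to the engine one by one, preserving the @Step ordering.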

## 2. Annotation reference (shared by Spark and Flink)

- **@Config:** supports Flink and Spark engine parameters, Fire framework parameters, and user-defined parameters. Engine settings are applied automatically when the **SparkSession** or Flink **ExecutionEnvironment** is built, removing the repetitive context-building code.
- **@Streaming:** for Flink, configures checkpoint parameters (frequency, timeout, etc.) and job parallelism; for Spark Streaming, configures the batch interval, whether backpressure is enabled, and the consumption rate under backpressure.
- **@Kafka:** configures the kafka cluster used by the job, plus kafka-client tuning parameters. To consume several kafka clusters, use @Kafka2, @Kafka3, and so on.
- **@Hive:** specifies the hive metastore thrift server address used by the job. Supports HDFS HA and cross-cluster Hive reads and writes.
- **@Process:** marks the entry point of user code; the annotated method is invoked by the Fire framework automatically.
- **@HBase**: configures HBase connection info, enabling one-line HBase reads and writes.

```scala
// Suppose multiple HBase clusters are configured via annotations:
@HBase("localhost:2181")
@HBase2(cluster = "192.168.0.1:2181", storageLevel = "DISK_ONLY")

// In code, the numeric suffix selects the cluster
this.fire.hbasePutDF(hTableName, studentDF, classOf[Student])	// keyNum=1 by default: uses the cluster from @HBase
this.fire.hbasePutDF(hTableName2, studentDF, classOf[Student], keyNum=2)	// keyNum=2: uses the cluster from @HBase2
```

- **@JDBC**: configures jdbc connection info; Fire's built-in database connection pool picks up the annotation's settings automatically.

```scala
@Jdbc(url = "jdbc:derby:memory:fire;create=true", username = "fire", password = "fire")
val insertSql = s"INSERT INTO $tableName (name, age, createTime, length, sex) VALUES (?, ?, ?, ?, ?)"
this.fire.jdbcUpdate(insertSql, Seq("admin", 12, timestamp, 10.0, 1))
```

- **@Scheduled**: works much like Spring's, supporting periodic tasks inside Spark Streaming or Flink jobs.

```scala
/**
  * A method annotated with @Scheduled is a timed task and runs periodically
  *
  * @scope by default runs on both the driver and the executors; "driver" restricts it to the driver
  * @initialDelay delay before the first execution
  */
@Scheduled(cron = "0/5 * * * * ?", scope = "driver", initialDelay = 60000)
def loadTable: Unit = {
  this.logger.info("reloading dimension table")
}
```
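@Scheduled also accepts a fixed interval (as in the fixedInterval example earlier in these docs). Its timing can be sketched as follows; the parameter names and millisecond units mirror the annotation attributes above, while the computation itself is an assumed illustration:

```python
def fire_times(initial_delay_ms: int, interval_ms: int, count: int, start_ms: int = 0) -> list:
    """Execution timestamps (ms) for a fixed-interval scheduled method:
    the first run happens after initialDelay, then every interval."""
    first = start_ms + initial_delay_ms
    return [first + i * interval_ms for i in range(count)]

# First run after a 60s delay, then every 5s:
times = fire_times(60000, 5000, 3)
```

A real scheduler would of course run these as timer callbacks rather than precomputing timestamps.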

- **@Before:** lifecycle annotation; invoked before Fire initializes the engine context.
- **@After:** lifecycle annotation; invoked before Fire exits the JVM, useful for a Spark batch job to release connection pools and similar resources.

## 3. Further reading

- ### [The Fire framework: rapid Spark and Flink development](https://zhuanlan.zhihu.com/p/540808612)

- ### [The Fire framework: runtime tuning of Flink checkpoints](https://zhuanlan.zhihu.com/p/551394441)

- ### [The Fire framework: dynamically adjusting the Spark Streaming batch interval](https://zhuanlan.zhihu.com/p/552848864)

- ### [The Fire framework: Flink parameter tuning and retrieval](https://zhuanlan.zhihu.com/p/543184683)

- ### [The Fire framework: elegant scheduled tasks in Flink](https://zhuanlan.zhihu.com/p/541358069)

## 4. Source code

- ### GitHub: https://github.com/ZTO-Express/fire

- ### Gitee: https://gitee.com/RS131419/fire

## 5. Community

### **Tech discussion (DingTalk group): *35373471***

<img src="./img/dingding.jpeg" width="500px" height="600px">

================================================
FILE: docs/connector/adb.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

## Flink ADB connector

*The Flink ADB connector is adapted from the standard JDBC SQL connector and is used the same way; the Fire framework automatically detects from the JDBC URL whether the target is MySQL or ADB.*



================================================
FILE: docs/connector/clickhouse.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

### Flink ClickHouse connector

#### 1. DDL

```scala
this.fire.sql(
"""
|CREATE TABLE t_user (
|    `id` BIGINT,
|    `name` STRING,
|    `age` INT,
|    `sex` STRING,
|    `score` DECIMAL,
|    `birthday` TIMESTAMP
|) WITH (
|    'connector' = 'clickhouse',
|    'url' = 'jdbc:clickhouse://node01:8123,node02:8123,node03:8123',
|    'database-name' = 'study',
|    'username' = 'fire',
|    'password' = 'fire',
|    'use-local' = 'true', -- when true, writes to a distributed table go to the local tables
|    'table-name' = 't_student',
|    'sink.batch-size' = '10',
|    'sink.flush-interval' = '3',
|    'sink.max-retries' = '3'
|)
|""".stripMargin)
```

#### [2. Complete example](../fire-examples/flink-examples/src/main/scala/com/zto/fire/examples/flink/connector/clickhouse/ClickhouseTest.scala)



================================================
FILE: docs/connector/hbase.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Reading and writing HBase

  HBase has strong support for updates and point lookups and is widely used in real-time computing. To simplify the HBase read/write APIs and improve development efficiency, the Fire framework wraps the HBase API in depth. Three access modes are currently supported: the Java API, the bulk API, and Spark's own API. Fire also supports reading from and writing to any number of HBase clusters within a single job.

## 1. HBase cluster configuration

### 1.1 Defining aliases

It is recommended to define HBase cluster URLs as aliases in a file named common.properties. An alias is maintained in one place and takes effect everywhere, which makes it easy to share and remember.

```properties
# alias the hbase connection info as "test"; in code the config shortens to @HBase("test")
fire.hbase.cluster.map.test		=			zk01:2181,zk02:2181,zk03:2181
```

### 1.2 Annotation-based

```scala
@HBase("zk01:2181,zk02:2181,zk03:2181")
@HBase2(cluster = "test", scanPartitions = 3, storageLevel = "DISK_ONLY")
```

### 1.3 Configuration-file-based

```properties
# Option 1: specify the zk url directly
hbase.cluster									=				zkurl
# Option 2: define an alias-to-url mapping first, then reference the alias.
# The line below maps the alias "test" to the given url
fire.hbase.cluster.map.test		=				zk01:2181,zk02:2181,zk03:2181
# reference by alias
hbase.cluster2								=				test
```
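The alias lookup above can be sketched as a plain dictionary resolution. The property names follow the docs; the resolution code itself is a simplified assumption:

```python
ALIAS_PREFIX = "fire.hbase.cluster.map."

def resolve_cluster(props: dict, key: str = "hbase.cluster") -> str:
    """Resolve an hbase.cluster value: if it matches a defined alias,
    return the mapped zk url, otherwise treat it as a literal url."""
    value = props[key]
    return props.get(ALIAS_PREFIX + value, value)

props = {
    "fire.hbase.cluster.map.test": "zk01:2181,zk02:2181,zk03:2181",
    "hbase.cluster": "test",            # resolved via the alias
    "hbase.cluster2": "zk09:2181",      # used literally
}
```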

## 2. Mapping tables to JavaBeans

The Fire framework simplifies the read/write API by binding a JavaBean to an HBase table:

```java
/**
 * JavaBean mapped to an HBase table
 *
 * @author ChengLong 2019-6-20 16:06:16
 */
@HConfig(multiVersion = true)
public class Student extends HBaseBaseBean<Student> {
    private Long id;
    private String name;
    private Integer age;
    private String createTime;
    // if a JavaBean field name differs from the HBase column name, specify it via value;
    // with multiple column families, also specify the family explicitly.
    // Here the hbase column is named length1 rather than length
    @FieldName(family = "data", value = "length1")
    private BigDecimal length;
    private Boolean sex;

    /**
     * builds the rowKey for this record
     *
     * @return this bean, with its rowKey populated
     */
    @Override
    public Student buildRowKey() {
        this.rowKey = this.id.toString();
        return this;
    }
}

```

  The code above defines a JavaBean named Student. It must extend HBaseBaseBean and implement buildRowKey, which tells the Fire framework how the rowKey is constructed.

  These two steps are all it takes to bind a JavaBean to an HBase table. For multi-version reads and writes, add the @HConfig(multiVersion = true) annotation on the class. If a JavaBean field name differs from the HBase column name, specify it with @FieldName(family = "data", value = "length1"); the column family can be set with the same annotation. If no column family is specified, a single family named info is assumed.

Scala classes and case classes are not yet supported; only basic field types are supported, not nested or complex field types.
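As a rough illustration of the field-to-column mapping described above (the annotation names come from the docs; the resolution logic is an assumed sketch):

```python
def column_name(field: str, overrides: dict) -> tuple:
    """Return (family, qualifier) for a JavaBean field, applying any
    @FieldName-style override and defaulting the family to "info"."""
    return overrides.get(field, ("info", field))

# mirrors @FieldName(family = "data", value = "length1") on the length field
overrides = {"length": ("data", "length1")}
```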

## 3. Spark jobs

### [3.1 Java API](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/hbase/HBaseConnectorTest.scala)

```scala
/**
  * Inserts an RDD into HBase using HBaseConnector.
  * The RDD element type must be a subclass of HBaseBaseBean
  */
def testHbasePutRDD: Unit = {
    val studentList = Student.newStudentList()
    val studentRDD = this.fire.createRDD(studentList, 2)
    // null fields are not inserted
    studentRDD.hbasePutRDD(this.tableName1)
}

/**
  * Inserts a DataFrame into HBase using HBaseConnector
  */
def testHBasePutDF: Unit = {
    val studentList = Student.newStudentList()
    val studentDF = this.fire.createDataFrame(studentList, classOf[Student])
    // insert 100 records per batch
    studentDF.hbasePutDF(this.tableName1, classOf[Student])
}

/**
  * Gets records with HBaseConnector and returns the result as an RDD
  */
def testHbaseGetRDD: Unit = {
    val getList = Seq("1", "2", "3", "5", "6")
    val getRDD = this.fire.createRDD(getList, 2)
    // multi-version get; the result set is wrapped in an rdd
    val studentRDD = this.fire.hbaseGetRDD(this.tableName1, classOf[Student], getRDD)
    studentRDD.printEachPartition
}

/**
  * Gets records with HBaseConnector and returns the result as a DataFrame
  */
def testHbaseGetDF: Unit = {
    val getList = Seq("1", "2", "3", "4", "5", "6")
    val getRDD = this.fire.createRDD(getList, 3)
    // the fetched result is returned as a dataframe
    val studentDF = this.fire.hbaseGetDF(this.tableName1, classOf[Student], getRDD)
    studentDF.show(100, false)
}

/**
  * Scans records with HBaseConnector and returns the result as an RDD
  */
def testHbaseScanRDD: Unit = {
    val rdd = this.fire.hbaseScanRDD2(this.tableName1, classOf[Student], "1", "6")
    rdd.repartition(3).printEachPartition
}

/**
  * Scans records with HBaseConnector and returns the result as a DataFrame
  */
def testHbaseScanDF: Unit = {
    val dataFrame = this.fire.hbaseScanDF2(this.tableName1, classOf[Student], "1", "6")
    dataFrame.repartition(3).show(100, false)
}
```

### [3.2 Bulk API](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/hbase/HbaseBulkTest.scala)

```scala
/**
  * Writes an rdd to hbase using the bulk api
  */
def testHbaseBulkPutRDD: Unit = {
    // Option 1: write the rdd to hbase; the element type must be a subclass of HBaseBaseBean
    val rdd = this.fire.createRDD(Student.newStudentList(), 2)
    // rdd.hbaseBulkPutRDD(this.tableName2)
    // Option 2: write via this.fire.hbaseBulkPutRDD
    this.fire.hbaseBulkPutRDD(this.tableName2, rdd)

    // setting the second parameter to false skips fields that are null
    // rdd.hbaseBulkPutRDD(this.tableName2, insertEmpty = false)
    // setting the third parameter to true writes in multi-version json format
    // rdd.hbaseBulkPutRDD(this.tableName3, false, true)
}

/**
  * Writes a DataFrame to hbase using the bulk api
  */
def testHbaseBulkPutDF: Unit = {
    // Option 1: write the DataFrame to hbase
    val rdd = this.fire.createRDD(Student.newStudentList(), 2)
    val studentDF = this.fire.createDataFrame(rdd, classOf[Student])
    // insertEmpty=false would skip null fields
    studentDF.hbaseBulkPutDF(this.tableName1, classOf[Student], keyNum = 2)
    // Option 2:
    // this.fire.hbaseBulkPutDF(this.tableName2, studentDF, classOf[Student])
}

/**
  * Bulk-gets records by rowKey and returns the result as an RDD
  */
def testHBaseBulkGetRDD: Unit = {
    // Option 1: read by rowKey; the rowKeyRdd element type is String
    val rowKeyRdd = this.fire.createRDD(Seq(1.toString, 2.toString, 3.toString, 5.toString, 6.toString), 2)
    val studentRDD = rowKeyRdd.hbaseBulkGetRDD(this.tableName1, classOf[Student], keyNum = 2)
    studentRDD.foreach(println)
    // Option 2: via this.fire.hbaseBulkGetRDD
    // val studentRDD2 = this.fire.hbaseBulkGetRDD(this.tableName2, rowKeyRdd, classOf[Student])
    // studentRDD2.foreach(println)
}

/**
  * Bulk-gets records by rowKey and returns the result as a DataFrame
  */
def testHBaseBulkGetDF: Unit = {
    // Option 1: read by rowKey; the rowKeyRdd element type is String
    val rowKeyRdd = this.fire.createRDD(Seq(1.toString, 2.toString, 3.toString, 5.toString, 6.toString), 2)
    val studentDF = rowKeyRdd.hbaseBulkGetDF(this.tableName2, classOf[Student])
    studentDF.show(100, false)
    // Option 2: via this.fire.hbaseBulkGetDF
    val studentDF2 = this.fire.hbaseBulkGetDF(this.tableName2, rowKeyRdd, classOf[Student])
    studentDF2.show(100, false)
}

/**
  * Bulk scan, with the result set mapped to an RDD
  */
def testHbaseBulkScanRDD: Unit = {
    // scan by rowKey range (or pass a pre-built Scan instance); returns RDD[Student]
    val scanRDD = this.fire.hbaseBulkScanRDD2(this.tableName2, classOf[Student], "1", "6")
    scanRDD.foreach(println)
}

/**
  * Bulk scan, with the result set mapped to a DataFrame
  */
def testHbaseBulkScanDF: Unit = {
    // scan by rowKey range (or pass a pre-built Scan instance); returns a DataFrame
    val scanDF = this.fire.hbaseBulkScanDF2(this.tableName2, classOf[Student], "1", "6")
    scanDF.show(100, false)
}
```

### [3.3 Spark API](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/hbase/HBaseHadoopTest.scala)

```scala
/**
  * Saves an rdd to hbase, built on saveAsNewAPIHadoopDataset
  */
def testHbaseHadoopPutRDD: Unit = {
    val studentRDD = this.fire.createRDD(Student.newStudentList(), 2)
    this.fire.hbaseHadoopPutRDD(this.tableName2, studentRDD, keyNum = 2)
    // Option 2: call the method directly on the rdd
    // studentRDD.hbaseHadoopPutRDD(this.tableName1)
}

/**
  * Saves a DataFrame to hbase, built on saveAsNewAPIHadoopDataset
  */
def testHbaseHadoopPutDF: Unit = {
    val studentRDD = this.fire.createRDD(Student.newStudentList(), 2)
    val studentDF = this.fire.createDataFrame(studentRDD, classOf[Student])
    // DataFrame is weakly typed compared with Dataset and RDD, so the concrete type classOf[Type] must be passed
    this.fire.hbaseHadoopPutDF(this.tableName3, studentDF, classOf[Student])
    // Option 2: call the method directly on the DataFrame
    // studentDF.hbaseHadoopPutDF(this.tableName3, classOf[Student])
}

/**
  * Scans large volumes of data the Spark way, with the result mapped to an RDD
  */
def testHBaseHadoopScanRDD: Unit = {
    val studentRDD = this.fire.hbaseHadoopScanRDD2(this.tableName2, classOf[Student], "1", "6", keyNum = 2)
    studentRDD.printEachPartition
}

/**
  * Scans large volumes of data the Spark way, with the result mapped to a DataFrame
  */
def testHBaseHadoopScanDF: Unit = {
    val studentDF = this.fire.hbaseHadoopScanDF2(this.tableName3, classOf[Student], "1", "6")
    studentDF.show(100, false)
}
```

## 4. Flink jobs

*[Example code:](../fire-examples/flink-examples/src/main/scala/com/zto/fire/examples/flink/stream/HBaseTest.scala)*

```scala
/**
  * hbase sink for a table
  */
def testTableHBaseSink(stream: DataStream[Student]): Unit = {
    stream.createOrReplaceTempView("student")
    val table = this.flink.sqlQuery("select id, name, age from student group by id, name, age")
    // Option 1: rows are converted to the corresponding JavaBean automatically
    // note: calling the hbase api on a table object requires the type parameter
    table.hbasePutTable[Student](this.tableName).setParallelism(1)
    this.fire.hbasePutTable[Student](table, this.tableName2, keyNum = 2)

    // Option 2: user-defined extraction, building an HBaseBaseBean subclass from each row
    table.hbasePutTable2[Student](this.tableName3)(row => new Student(1L, row.getField(1).toString, row.getField(2).toString.toInt))
    // or
    this.fire.hbasePutTable2[Student](table, this.tableName5, keyNum = 2)(row => new Student(1L, row.getField(1).toString, row.getField(2).toString.toInt))
}

/**
  * hbase sink for a table
  */
def testTableHBaseSink2(stream: DataStream[Student]): Unit = {
    val table = this.fire.sqlQuery("select id, name, age from student group by id, name, age")

    // Option 2: user-defined extraction, building an HBaseBaseBean subclass from each row
    table.hbasePutTable2(this.tableName6)(row => new Student(1L, row.getField(1).toString, row.getField(2).toString.toInt))
    // or
    this.flink.hbasePutTable2(table, this.tableName7, keyNum = 2)(row => new Student(1L, row.getField(1).toString, row.getField(2).toString.toInt))
}

/**
  * hbase sink for a stream
  */
def testStreamHBaseSink(stream: DataStream[Student]): Unit = {
    // Option 1: the DataStream element type is a subclass of HBaseBaseBean
    // stream.hbasePutDS(this.tableName)
    this.fire.hbasePutDS[Student](stream, this.tableName8)

    // Option 2: user-defined logic assembles each value into an HBaseBaseBean subclass
    stream.hbasePutDS2(this.tableName9, keyNum = 2)(value => value)
    // or
    this.fire.hbasePutDS2(stream, this.tableName10)(value => value)
}

/**
  * hbase sink for a stream
  */
def testStreamHBaseSink2(stream: DataStream[Student]): Unit = {
    // Option 2: user-defined logic assembles each value into an HBaseBaseBean subclass
    stream.hbasePutDS2(this.tableName11)(value => value)
    // or
    this.fire.hbasePutDS2(stream, this.tableName12, keyNum = 2)(value => value)
}

/**
  * basic hbase operations
  */
def testHBase: Unit = {
    // get
    val getList = ListBuffer(HBaseConnector.buildGet("1"))
    val student = HBaseConnector.get(this.tableName, classOf[Student], getList, 1)
    if (student != null) println(JSONUtils.toJSONString(student))
    // scan
    val studentList = HBaseConnector.scan(this.tableName, classOf[Student], HBaseConnector.buildScan("0", "9"), 1)
    if (studentList != null) println(JSONUtils.toJSONString(studentList))
    // delete
    HBaseConnector.deleteRows(this.tableName, Seq("1"))
}
```

## 5. Reading and writing multiple clusters

Fire supports reading from and writing to any number of HBase clusters in a single job. First declare the zk address of every HBase cluster, distinguished by keyNum:

```scala
@HBase("zk01:2181")
@HBase2("zk02:2181")
@HBase3("zk03:2181")
```

```properties
hbase.cluster=zk01:2181
hbase.cluster3=zk02:2181
hbase.cluster8=zk03:2181
```

In code, the keyNum argument tells Fire which HBase cluster a given call connects to. Note: the keyNum used in the API must match the numeric suffix used in the configuration.

```scala
// insert operations
studentRDD.hbasePutRDD(this.tableName1)
studentRDD.hbasePutRDD(this.tableName2, keyNum = 3)
studentRDD.hbasePutRDD(this.tableName3, keyNum = 8)
// scan operations
this.fire.hbaseScanDF2(this.tableName1, classOf[Student], "1", "6")
this.fire.hbaseScanDF2(this.tableName1, classOf[Student], "1", "6", keyNum = 3)
```
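The keyNum-to-property convention above can be sketched as follows: keyNum = 1 maps to the bare key, and any other keyNum is appended as a numeric suffix. The convention is taken from the configuration examples; the code itself is an illustrative assumption:

```python
def config_key(base: str, key_num: int = 1) -> str:
    """Derive the property name for a given keyNum: keyNum=1 uses the
    bare key, larger keyNums are appended as a numeric suffix."""
    return base if key_num == 1 else f"{base}{key_num}"

props = {"hbase.cluster": "zk01:2181", "hbase.cluster3": "zk02:2181", "hbase.cluster8": "zk03:2181"}
```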

## 6. The @HBase annotation

```java
/**
 * HBase connection info: hbase.cluster
 */
String value() default "";

/**
 * HBase connection info: hbase.cluster, same as value
 */
String cluster() default "";

/**
 * column family name: hbase.column.family
 */
String family() default "";

/**
 * max records inserted per thread: fire.hbase.batch.size
 */
int batchSize() default -1;

/**
 * spark engine: number of rdd partitions the scan result is spread across: fire.hbase.scan.partitions
 */
int scanPartitions() default -1;

/**
 * spark engine: storage level for the scan result: fire.hbase.storage.level
 */
String storageLevel() default "";

/**
 * flink engine: max retries when an hbase sink fails: hbase.max.retry
 */
int maxRetries() default -1;

/**
 * WAL durability level: hbase.durability
 */
String durability() default "";

/**
 * whether to cache table metadata, speeding up table-existence checks: fire.hbase.table.exists.cache.enable
 */
boolean tableMetaCache() default true;

/**
 * hbase-client parameters, given as key=value pairs
 */
String[] config() default "";
```

## 7. hbase-client parameters

hbase-client parameters can be specified via the **config** attribute of @HBase, or via properties prefixed with **fire.hbase.conf.**:

```scala
@HBase(cluster = "test", config = Array[String]("hbase.rpc.timeout=60000ms", "hbase.client.scanner.timeout.period=60000ms"))
```

```properties
fire.hbase.conf.hbase.rpc.timeout										=				60000ms
fire.hbase.conf.hbase.client.scanner.timeout.period	=				60000ms
```

| Property                                    | Engine | Meaning                                                                   |
| ------------------------------------------- | ------ | ------------------------------------------------------------------------- |
| fire.hbase.batch.size                       | common | insert batch size, capping the records a single task sinks at once        |
| hbase.column.family                         | common | column family name, default info                                          |
| hbase.max.retry                             | flink  | number of retries after a failed insert                                   |
| hbase.cluster                               | common | url or alias of the HBase cluster to read/write                           |
| hbase.durability                            | common | durability setting passed to hbase-client                                 |
| fire.hbase.storage.level                    | spark  | storage level for caching scan results, avoiding repeated HBase scans     |
| fire.hbase.scan.partitions                  | spark  | number of partitions to repartition into after an HBase scan              |
| fire.hbase.cluster.map.                     | common | prefix for hbase cluster alias mappings                                   |
| fire.hbase.table.exists.cache.enable        | common | whether to cache table-existence checks                                   |
| fire.hbase.table.exists.cache.reload.enable | common | whether to periodically refresh the table-existence cache                 |
| fire.hbase.table.exists.cache.initialDelay  | common | initial delay of the cache-refresh task                                   |
| fire.hbase.table.exists.cache.period        | common | execution interval of the cache-refresh task                              |
| fire.hbase.conf.                            | common | prefix for hbase java api settings, supporting any hbase-client parameter |
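Stripping the fire.hbase.conf. prefix to assemble the hbase-client configuration can be sketched like this (the prefix comes from the table above; the parsing code is an assumed illustration):

```python
PREFIX = "fire.hbase.conf."

def hbase_client_conf(props: dict) -> dict:
    """Collect all fire.hbase.conf.* properties and strip the prefix,
    yielding plain hbase-client settings."""
    return {k[len(PREFIX):]: v for k, v in props.items() if k.startswith(PREFIX)}

props = {
    "fire.hbase.conf.hbase.rpc.timeout": "60000ms",
    "fire.hbase.conf.hbase.client.scanner.timeout.period": "60000ms",
    "hbase.cluster": "test",  # not an hbase-client setting, so it is ignored
}
```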



================================================
FILE: docs/connector/hive.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Hive integration and configuration

With the Fire framework, a single line of configuration gives Spark and Flink seamless read/write access to Hive, even across clusters (i.e. when the real-time job does not run in the same cluster as Hive).

### 1. Annotation-based

```scala
// specify the thrift server url(s), comma-separated
@Hive("thrift://thrift01:9083")
// or reference an alias; hive aliases are configured the same way as kafka aliases
@Hive("test")
```

### 2. Configuration file

```properties
# Option 1: specify the hive thrift server address(es) directly, comma-separated
spark.hive.cluster																												=				thrift://hive01:9083,thrift://hive02:9083

# Option 2 (recommended): reference an alias defined via fire.hive.cluster.map.xxx
# shared settings, especially cluster info, are best kept in commons.properties
fire.hive.cluster.map.batch																								=				thrift://hive03:9083,thrift://hive04:9083
# "batch" is the alias for the url above; multiple hive cluster aliases may be defined
spark.hive.cluster																												=				batch
```

### 3. [Example code](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/hive/HiveClusterReader.scala)

```scala
// with the configuration above, the specified hive can be queried directly
this.fire.sql("select * from hive.tableName").show
```

### 4. High availability

  A NameNode failover can kill Spark Streaming jobs that read and write hive. To stay flexible and avoid putting core-site.xml and hdfs-site.xml in the project's resources directory, Fire lets the NameNode HA information be supplied via configuration. In each key below, "batch" refers to the alias defined by fire.hive.cluster.map.batch; the remaining values depend on the cluster. With multiple hive clusters, multiple sets of HA configuration can be defined.

```properties
# whether to enable HDFS HA
spark.hdfs.ha.enable																												=				true
# HDFS HA settings for the offline hive cluster; key pattern: spark.hdfs.ha.conf. + hive cluster alias + standard hdfs ha key
spark.hdfs.ha.conf.batch.fs.defaultFS																				=				hdfs://nameservice1
spark.hdfs.ha.conf.batch.dfs.nameservices																		=				nameservice1
spark.hdfs.ha.conf.batch.dfs.ha.namenodes.nameservice1											=				namenode5231,namenode5229
spark.hdfs.ha.conf.batch.dfs.namenode.rpc-address.nameservice1.namenode5231	=				namenode01:8020
spark.hdfs.ha.conf.batch.dfs.namenode.rpc-address.nameservice1.namenode5229	=				namenode02:8020
spark.hdfs.ha.conf.batch.dfs.client.failover.proxy.provider.nameservice1		=       org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
```

### 5. Hive set parameters

```properties
# properties prefixed with spark.hive.conf. take effect directly, e.g. enabling hive dynamic partitioning
# each is executed internally as: this.fire.sql("set hive.exec.dynamic.partition=true")
spark.hive.conf.hive.exec.dynamic.partition																	=				true
spark.hive.conf.hive.exec.dynamic.partition.mode														=				nonstrict
spark.hive.conf.hive.exec.max.dynamic.partitions														=				5000
```
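The prefix-to-SET translation described in the comment above can be sketched as follows (the prefix comes from the docs; the parsing code is an assumed illustration):

```python
PREFIX = "spark.hive.conf."

def hive_set_statements(props: dict) -> list:
    """Turn spark.hive.conf.* properties into the 'set k=v' statements
    that would be executed via this.fire.sql(...)."""
    return [f"set {k[len(PREFIX):]}={v}" for k, v in sorted(props.items()) if k.startswith(PREFIX)]

props = {"spark.hive.conf.hive.exec.dynamic.partition": "true",
         "spark.hive.conf.hive.exec.dynamic.partition.mode": "nonstrict"}
```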

### 6. The @Hive annotation

```java
/**
 * hive connection alias: hive.cluster
 */
String value() default "";

/**
 * hive connection alias: hive.cluster, same as value
 */
String cluster() default "";

/**
 * hive version: hive.version
 */
String version() default "";

/**
 * name of the hive catalog in flink: hive.catalog.name
 */
String catalog() default "";

/**
 * partition field name (e.g. dt, ds): default.table.partition.name
 */
String partition() default "";
```

### 7. Additional settings

| Property                           | Engine | Meaning                                    |
| ---------------------------------- | ------ | ------------------------------------------ |
| flink.hive.version                 | flink  | hive version flink integrates with         |
| flink.default.database.name        | flink  | default database name (default: tmp)       |
| flink.default.table.partition.name | flink  | default hive partition field name          |
| flink.hive.catalog.name            | flink  | name of the hive catalog                   |
| fire.hive.cluster.map.             | common | prefix for hive thrift url alias mappings  |
| hive.conf.                         | common | fixed prefix supporting any hive parameter |



================================================
FILE: docs/connector/jdbc.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Reading and writing via JDBC

  JDBC access is a very common need in real-time jobs. To simplify it, the Fire framework wraps JDBC operations further, reducing many common tasks to a single line of code. Fire also supports reading from and writing to any number of data sources within a single job.

### 1. Data source configuration

#### 1.1 Annotation-based

```scala
@Jdbc(url = "jdbc:derby:memory:fire;create=true", username = "fire", password = "fire")
@Jdbc3(url = "jdbc:derby:memory:fire;create=true", username = "fire", maxPoolSize=3, config=Array[String]("c3p0.key=value"))
```

#### 1.2 Configuration-file-based

  A data source includes the JDBC url, driver, username, and password. It is recommended to keep these in commons.properties rather than configuring them in every job. Fire ships with the c3p0 connection pool and, in distributed deployments, limits each container to 3 connections by default, so that allocating many containers does not exhaust database connections.

```properties
db.jdbc.url                  =       jdbc:derby:memory:fire;create=true
db.jdbc.driver               =       org.apache.derby.jdbc.EmbeddedDriver
db.jdbc.maxPoolSize          =       3
db.jdbc.user                 =       fire
db.jdbc.password             =       fire

# for multiple data sources, append the corresponding keyNum to each property name
db.jdbc.url2                 =       jdbc:mysql://mysql:3306/fire
db.jdbc.driver2              =       com.mysql.jdbc.Driver
db.jdbc.user2                =       fire
db.jdbc.password2            =       fire
```

### 2. API usage

#### [2.1 Spark jobs](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/jdbc/JdbcTest.scala)

```scala
/**
   * Insert/update/delete against a relational database via jdbc
   */
def testJdbcUpdate: Unit = {
    val timestamp = DateFormatUtils.formatCurrentDateTime()
    // insert
    val insertSql = s"INSERT INTO $tableName (name, age, createTime, length, sex) VALUES (?, ?, ?, ?, ?)"
    this.fire.jdbcUpdate(insertSql, Seq("admin", 12, timestamp, 10.0, 1))
    // write to the second relational database from the configuration
    this.fire.jdbcUpdate(insertSql, Seq("admin", 12, timestamp, 10.0, 1), keyNum = 2)

    // update
    val updateSql = s"UPDATE $tableName SET name=? WHERE id=?"
    this.fire.jdbcUpdate(updateSql, Seq("root", 1))

    // batch insert
    val batchSql = s"INSERT INTO $tableName (name, age, createTime, length, sex) VALUES (?, ?, ?, ?, ?)"

    this.fire.jdbcBatchUpdate(batchSql, Seq(Seq("spark1", 21, timestamp, 100.123, 1),
                                            Seq("flink2", 22, timestamp, 12.236, 0),
                                            Seq("flink3", 22, timestamp, 12.236, 0),
                                            Seq("flink4", 22, timestamp, 12.236, 0),
                                            Seq("flink5", 27, timestamp, 17.236, 0)))

    // batch update
    this.fire.jdbcBatchUpdate(s"update $tableName set sex=? where id=?", Seq(Seq(1, 1), Seq(2, 2), Seq(3, 3), Seq(4, 4), Seq(5, 5), Seq(6, 6)))

    // Option 1: delete via this.fire
    val sql = s"DELETE FROM $tableName WHERE id=?"
    this.fire.jdbcUpdate(sql, Seq(2))
    // Option 2: via JdbcConnector.executeUpdate

    // running several statements within one transaction
    /*val connection = this.jdbc.getConnection()
    this.fire.jdbcBatchUpdate("insert", connection = connection, commit = false, closeConnection = false)
    this.fire.jdbcBatchUpdate("delete", connection = connection, commit = false, closeConnection = false)
    this.fire.jdbcBatchUpdate("update", connection = connection, commit = true, closeConnection = true)*/
}

  /**
   * Writes DataFrame data to a relational database
   */
  def testDataFrameSave: Unit = {
    val df = this.fire.createDataFrame(Student.newStudentList(), classOf[Student])

    val insertSql = s"INSERT INTO spark_test(name, age, createTime, length, sex) VALUES (?, ?, ?, ?, ?)"
    // pass a subset of DataFrame columns as parameters; the order must match the sql placeholders.
    // batch sets the batch size (defaults to spark.db.jdbc.batch.size)
    df.jdbcBatchUpdate(insertSql, Seq("name", "age", "createTime", "length", "sex"), batch = 100)

    df.createOrReplaceTempViewCache("student")
    val sqlDF = this.fire.sql("select name, age, createTime from student where id>=1").repartition(1)
    // if no columns are given, all columns of the DataFrame are passed, in placeholder order
    sqlDF.jdbcBatchUpdate("insert into spark_test(name, age, createTime) values(?, ?, ?)")
    // equivalent to the above
    // this.fire.jdbcBatchUpdateDF(sqlDF, "insert into spark_test(name, age, createTime) values(?, ?, ?)")
  }
```

#### [2.2 Flink jobs](../fire-examples/flink-examples/src/main/scala/com/zto/fire/examples/flink/stream/JdbcTest.scala)

```scala
/**
   * jdbc sink for a table
   */
def testTableJdbcSink(stream: DataStream[Student]): Unit = {
    stream.createOrReplaceTempView("student")
    val table = this.fire.sqlQuery("select name, age, createTime, length, sex from student group by name, age, createTime, length, sex")

    // Option 1: the column order and types of the table must match the placeholders in the jdbc sql
    table.jdbcBatchUpdate(sql(this.tableName)).setParallelism(1)
    // or
    this.fire.jdbcBatchUpdateTable(table, sql(this.tableName), keyNum = 6).setParallelism(1)

    // Option 2: custom row extraction, for when the row's columns differ from the sql placeholders in count or order
    table.jdbcBatchUpdate2(sql(this.tableName), flushInterval = 10000, keyNum = 7)(row => {
        Seq(row.getField(0), row.getField(1), row.getField(2), row.getField(3), row.getField(4))
    })
    // or
    this.flink.jdbcBatchUpdateTable2(table, sql(this.tableName), keyNum = 8)(row => {
        Seq(row.getField(0), row.getField(1), row.getField(2), row.getField(3), row.getField(4))
    }).setParallelism(1)
}

/**
   * jdbc sink for a stream
   */
def testStreamJdbcSink(stream: DataStream[Student]): Unit = {
    // Option 1: give the field list; values are pulled from the DataStream by reflection and bound to the sql placeholders.
    // fields has two meanings: 1. the column order in the sql (table side) 2. the JavaBean field names (bean side).
    // note: the names must be the JavaBean field names, not the table column names, and must match the placeholders in order and count
    stream.jdbcBatchUpdate(sql(this.tableName2), fields).setParallelism(3)
    // or
    this.fire.jdbcBatchUpdateStream(stream, sql(this.tableName2), fields, keyNum = 6).setParallelism(1)

    // Option 2: assemble the values with a user-supplied function, for cases reflection cannot handle; more general
    stream.jdbcBatchUpdate2(sql(this.tableName2), 3, 30000, keyNum = 7) {
        // define here how each element of the dstream maps to the sql placeholders
        value => Seq(value.getName, value.getAge, DateFormatUtils.formatCurrentDateTime(), value.getLength, value.getSex)
    }.setParallelism(1)

    // or
    this.flink.jdbcBatchUpdateStream2(stream, sql(this.tableName2), keyNum = 8) {
        value => Seq(value.getName, value.getAge, DateFormatUtils.formatCurrentDateTime(), value.getLength, value.getSex)
    }.setParallelism(2)
}
```

### 3. Reading and writing multiple data sources

The Fire framework supports reading and writing any number of data sources within a single task; simply distinguish them with the keyNum parameter. See the HBase and Kafka connectors for configuration and usage details.
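As a sketch of the keyNum convention, here is what two JDBC data sources could look like in a task's configuration file; the hostnames, database names, and credentials are placeholders:

```properties
# Data source 1: referenced in code with the default keyNum = 1
db.jdbc.url              =   jdbc:mysql://mysql01:3306/db1
db.jdbc.user             =   root
db.jdbc.password         =   root

# Data source 2: every key carries the suffix 2, referenced in code with keyNum = 2
db.jdbc.url2             =   jdbc:mysql://mysql02:3306/db2
db.jdbc.user2            =   root
db.jdbc.password2        =   root
```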

### 4. The @Jdbc annotation

```java

/**
 * The jdbc url; same as value
 */
String url();

/**
 * The jdbc driver class; if omitted, it is inferred from the url
 */
String driver() default "";

/**
 * The jdbc username
 */
String username();

/**
 * The jdbc password
 */
String password() default "";

/**
 * Transaction isolation level
 */
String isolationLevel() default "";

/**
 * Maximum number of connections in the pool
 */
int maxPoolSize() default -1;

/**
 * Minimum number of connections in the pool
 */
int minPoolSize() default -1;

/**
 * Initial number of connections in the pool
 */
int initialPoolSize() default -1;

/**
 * Number of connections acquired at a time when the pool grows
 */
int acquireIncrement() default -1;

/**
 * Maximum idle time of a connection
 */
int maxIdleTime() default -1;

/**
 * Number of records per batch operation
 */
int batchSize() default -1;

/**
 * Flink engine: flush interval (ms)
 */
long flushInterval() default -1;

/**
 * Flink engine: maximum number of retries on failure
 */
int maxRetries() default -1;

/**
 * Spark engine: storage level of the cache after a scan: fire.jdbc.storage.level
 */
String storageLevel() default "";

/**
 * Spark engine: number of rdd partitions the select result is stored in: fire.jdbc.query.partitions
 */
int queryPartitions() default -1;

/**
 * Maximum sql length printed in logs
 */
int logSqlLength() default -1;

/**
 * c3p0 parameters, given as key=value pairs
 */
String[] config() default "";
```

### 5. Configuration options

The options below can be added to the task's configuration file as needed.

| Option                   | Engine | Meaning                                                         |
| ------------------------ | ------ | --------------------------------------------------------------- |
| db.jdbc.url              | common | jdbc url                                                        |
| db.jdbc.url.map.         | common | defines an alias for a url                                      |
| db.jdbc.driver           | common | driver class                                                    |
| db.jdbc.user             | common | database username                                               |
| db.jdbc.password         | common | database password                                               |
| db.jdbc.isolation.level  | common | transaction isolation level                                     |
| db.jdbc.maxPoolSize      | common | maximum number of pooled connections                            |
| db.jdbc.minPoolSize      | common | minimum number of pooled connections                            |
| db.jdbc.acquireIncrement | common | number of connections acquired at a time when the pool runs low |
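The options above can be combined in a single task configuration file; the following sketch tunes the connection pool, with illustrative values:

```properties
db.jdbc.url               =   jdbc:mysql://mysql-server:3306/fire
db.jdbc.driver            =   com.mysql.jdbc.Driver
db.jdbc.user              =   root
db.jdbc.password          =   root
# Connection pool tuning (values are illustrative, not recommendations)
db.jdbc.maxPoolSize       =   10
db.jdbc.minPoolSize       =   1
db.jdbc.acquireIncrement  =   2
```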



================================================
FILE: docs/connector/kafka.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Kafka data source

### 1. API usage

The fire framework makes it easy to consume data from kafka, including multiple topics across multiple kafka clusters within the same task. The core code is a single line:

```scala
// Spark Streaming task
val dstream = this.fire.createKafkaDirectStream()
// Structured Streaming task
val kafkaDataset = this.fire.loadKafkaParseJson()
// Flink task
val dstream = this.fire.createKafkaDirectStream()
```

All of the above apis accept kafka parameters directly, but fire recommends putting the cluster information into the configuration file, which keeps the code readable, concise, and flexible.

### 2. Kafka configuration

You may wonder how the program knows what to consume when the kafka broker and topic are not specified in the code. That information lives in a configuration file named after the task. You can also put the kafka details in code; if the cluster information is given both in code and in the configuration file, the configuration file takes priority.

#### 2.1 Defining aliases

It is recommended to define kafka cluster urls as aliases in a configuration file named common.properties. An alias is maintained in one place and takes effect everywhere, which makes it easy to share and easy to remember.

```properties
# Two kafka cluster aliases, mq and test, each mapped to its url
fire.kafka.cluster.map.mq					=				kafka01:9092,kafka02:9092,kafka03:9092
fire.kafka.cluster.map.test				=				kafka-test01:9092,kafka-test02:9092,kafka-test03:9092
```

#### 2.2 Annotation-based configuration

Once aliases are defined, kafka cluster information can be configured with annotations; fire supports reading and writing multiple kafka clusters in one task:

```scala
@Kafka(brokers = "mq", topics = "fire", groupId = "fire")
@Kafka2(brokers = "test", topics = "fire", groupId = "fire", sessionTimeout = 600000, autoCommit = false)
```

#### 2.3 File-based configuration

```properties
spark.kafka.brokers.name           =       mq
# Required: the kafka topic list, comma separated
spark.kafka.topics                 =       fire
# Specifies the groupId; defaults to the current class name if omitted
spark.kafka.group.id               =       fire

# Consumer configuration for the kafka cluster aliased test; note the suffix 2 on every key, identifying the second cluster
spark.kafka.brokers.name2          =       test
# Required: the kafka topic list, comma separated
spark.kafka.topics2                =       fire
# Specifies the groupId; defaults to the current class name if omitted
spark.kafka.group.id2              =       fire
```

### 3. Consuming multiple kafka clusters and topics

How does code refer to keys with a numeric suffix? Through the keyNum parameter:

```scala
// Corresponds to spark.kafka.brokers.name=mq or the @Kafka("mq") cluster; keyNum defaults to 1 if not specified
val dstream = this.fire.createKafkaDirectStream()
// Corresponds to spark.kafka.brokers.name2=test or the @Kafka2("test") cluster
val dstream2 = this.fire.createKafkaDirectStream(keyNum=2)
```

### 4. Committing offsets

#### 4.1 Manual commit

```scala
dstream.kafkaCommitOffsets()
```

#### 4.2 Automatic commit

When Spark Streaming processes data, the offset commit and the data processing may not happen in the same operator, so a failed stage can lose data while its offsets are still committed. To solve this, the fire framework provides the ***foreachRDDAtLeastOnce*** operator, which guarantees no data loss, retries on failure (3 times by default), and commits automatically on success.

```scala
@Streaming(20) // the spark streaming batch interval
@Kafka(brokers = "bigdata_test", topics = "fire", groupId = "fire")
// The annotations above accept either an alias or a url, e.g. @Hive(thrift://hive:9083); alias mappings go in cluster.properties
object AtLeastOnceTest extends BaseSparkStreaming {

  override def process: Unit = {
    val dstream = this.fire.createKafkaDirectStream()

    // At-least-once semantics: offsets are committed automatically on success; on failure the batch is retried the configured number of times, and the task exits if it still fails
    dstream.foreachRDDAtLeastOnce(rdd => {
      val studentRDD = rdd.map(t => JSONUtils.parseObject[Student](t.value())).repartition(2)
      val insertSql = s"INSERT INTO spark_test(name, age, createTime, length, sex) VALUES (?, ?, ?, ?, ?)"
      println("kafka.brokers.name=>" + this.conf.getString("kafka.brokers.name"))
      studentRDD.toDF().jdbcBatchUpdate(insertSql, Seq("name", "age", "createTime", "length", "sex"), batch = 1)
    })

    this.fire.start
  }
}
```

### 5. Tuning kafka-client parameters

kafka-client specific parameters are configured through config:

```scala
@Kafka(brokers = "kafka01:9092", config = Array[String]("session.timeout.ms=30000", "request.timeout.ms=30000"))
```

In a configuration file, prefix any kafka-client parameter with kafka.conf:

```properties
# Keys starting with kafka.conf support every kafka client setting
kafka.conf.session.timeout.ms   =     300000
kafka.conf.request.timeout.ms   =     400000
kafka.conf.session.timeout.ms2  =     300000
```

### 6. Code examples

[1. Spark kafka consumer demo](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/streaming/KafkaTest.scala)

[2. Flink kafka consumer demo](../fire-examples/flink-examples/src/main/scala/com/zto/fire/examples/flink/stream/HBaseTest.scala)

### 7. The @Kafka annotation

```java
/**
 * kafka cluster connection info; same as value
 */
String brokers();

/**
 * kafka topics, comma separated
 */
String topics();

/**
 * consumer group id
 */
String groupId();

/**
 * where to start consuming from
 */
String startingOffset() default "";

/**
 * where to stop consuming
 */
String endingOffsets() default "";

/**
 * whether to enable automatic offset commits
 */
boolean autoCommit() default false;

/**
 * session timeout (ms)
 */
long sessionTimeout() default -1;

/**
 * request timeout (ms)
 */
long requestTimeout() default -1;

/**
 * poll interval (ms)
 */
long pollInterval() default -1;

/**
 * start consuming from the given timestamp
 */
long startFromTimestamp() default -1;

/**
 * resume consuming from the offsets kafka has stored for the group
 */
boolean startFromGroupOffsets() default false;

/**
 * whether to force-override the offsets stored in the checkpoint and consume from the specified position
 */
boolean forceOverwriteStateOffset() default false;

/**
 * whether to force periodic offset commits to kafka even when checkpointing is enabled
 */
boolean forceAutoCommit() default false;

/**
 * interval of forced commits (ms)
 */
long forceAutoCommitInterval() default -1;

/**
 * kafka-client parameters, given as key=value pairs
 */
String[] config() default "";
```

### 8. Configuration options

| Option                                   | Engine | Meaning                                                      |
| ---------------------------------------- | ------ | ------------------------------------------------------------ |
| fire.kafka.cluster.map.                  | common | defines a kafka cluster alias                                |
| kafka.conf.                              | common | sets kafka-client parameters                                 |
| kafka.brokers.name                       | common | the kafka cluster url or alias to consume                    |
| kafka.topics                             | common | the kafka topic list, comma separated                        |
| kafka.group.id                           | common | the kafka consumer group id                                  |
| kafka.starting.offsets                   | common | the starting consumer position                               |
| kafka.ending.offsets                     | common | the ending consumer position                                 |
| kafka.enable.auto.commit                 | common | whether offsets are maintained automatically                 |
| kafka.failOnDataLoss                     | common | whether to fail on data loss                                 |
| kafka.session.timeout.ms                 | common | kafka session timeout                                        |
| kafka.request.timeout.ms                 | common | kafka request timeout                                        |
| kafka.max.poll.interval.ms               | common | maximum kafka poll interval                                  |
| kafka.CommitOffsetsOnCheckpoints         | flink  | whether to commit offsets on checkpoint                      |
| kafka.StartFromTimestamp                 | flink  | start consuming from the given timestamp                     |
| kafka.StartFromGroupOffsets              | flink  | start consuming from the stored group offsets                |
| kafka.force.overwrite.stateOffset.enable | flink  | ignore the offsets stored in state (use with caution; intended for abnormal operations such as kafka cluster migration) |
| kafka.force.autoCommit.enable            | flink  | whether to force periodic offset commits while checkpointing is enabled |
| kafka.force.autoCommit.Interval          | flink  | interval of the periodic offset commits (ms)                 |



================================================
FILE: docs/connector/oracle.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

## Flink oracle connector

*The Flink oracle connector is adapted from the jdbc sql connector and is used exactly like Flink's standard jdbc sql connector; the fire framework detects from the jdbc url whether the target is mysql or oracle.*



================================================
FILE: docs/connector/rocketmq.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# RocketMQ connector

### 1. API usage

The fire framework makes it easy to consume data from rocketmq, including multiple topics across multiple rocketmq clusters within the same task. The core code is a single line:

```scala
// Spark Streaming or Flink streaming task
val dstream = this.fire.createRocketMqPullStream()
```

The above api accepts rocketmq parameters directly, but fire recommends putting the cluster information into the configuration file, which keeps the code readable, concise, and flexible.

### 2. Flink SQL connector

```scala
this.fire.sql("""
                    |CREATE table source (
                    |  id bigint,
                    |  name string,
                    |  age int,
                    |  length double,
                    |  data DECIMAL(10, 5)
                    |) WITH
                    |   (
                    |   'connector' = 'fire-rocketmq',
                    |   'format' = 'json',
                    |   'rocket.brokers.name' = 'zms',
                    |   'rocket.topics'       = 'fire',
                    |   'rocket.group.id'     = 'fire',
                    |   'rocket.consumer.tag' = '*'
                    |   )
                    |""".stripMargin)
```

**Using the with parameters:**

The with parameters of the rocketmq sql connector reuse the api configuration keys. To set rocketmq-client specific parameters, prefix the client tuning parameter with rocket.conf.
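For example, the CREATE TABLE statement above could pass a client tuning parameter through the rocket.conf. prefix; this is a sketch reusing the pull.max.speed.per.partition key shown elsewhere in this document, and the value is illustrative:

```sql
CREATE TABLE source (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'fire-rocketmq',
  'format' = 'json',
  'rocket.brokers.name' = 'zms',
  'rocket.topics'       = 'fire',
  'rocket.group.id'     = 'fire',
  'rocket.consumer.tag' = '*',
  -- rocketmq-client tuning parameter passed through via the rocket.conf. prefix
  'rocket.conf.pull.max.speed.per.partition' = '5000'
);
```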

### 3. RocketMQ configuration

#### 3.1 Annotation-based

```scala
@RocketMQ(brokers = "bigdata_test", topics = "fire", groupId = "fire", tag = "*")
@RocketMQ2(brokers = "bigdata_test", topics = "fire2", groupId = "fire2", tag = "*", startingOffset = "latest")
```

#### 3.2 File-based

```properties
spark.rocket.brokers.name													=			rocketmq01:9876;rocketmq02:9876
spark.rocket.topics																=			topic_name
spark.rocket.group.id															=			groupId
spark.rocket.pull.max.speed.per.partition					=			15000
spark.rocket.consumer.tag													=			*
# Keys starting with spark.rocket.conf support every rocketmq client setting
spark.rocket.conf.pull.max.speed.per.partition		=   	5000
```

### 4. Consuming multiple RocketMQ clusters and topics

In production, one task may consume several RocketMQ clusters and several topics. Fire handles this by convention: a numeric suffix on the configuration key distinguishes the RocketMQ instances, as in the following list:

```properties
# Two RocketMQ clusters are configured below
spark.rocket.brokers.name													=			localhost:9876;localhost02:9876
spark.rocket.topics																=			topic_name
spark.rocket.consumer.instance										=			FireFramework
spark.rocket.group.id															=			groupId

# Note the numeric key suffix, which corresponds to keyNum=2 in the code
spark.rocket.brokers.name2												=			localhost:9876;localhost02:9876
spark.rocket.topics2															=			topic_name2
spark.rocket.consumer.instance2										=			FireFramework
spark.rocket.group.id2														=			groupId2
```

How does code refer to keys with a numeric suffix? Through the keyNum parameter:

```scala
// Corresponds to the RocketMQ cluster configured by spark.rocket.brokers.name
val dstream = this.fire.createRocketMqPullStream(keyNum=1)
// Corresponds to the RocketMQ cluster configured by spark.rocket.brokers.name2
val dstream2 = this.fire.createRocketMqPullStream(keyNum=2)
```

### 5. Tuning RocketMQ-client parameters

Sometimes the RocketMQ consumer needs client-side tuning. Fire supports every RocketMQ-client parameter; adding it to the configuration file is enough:

```properties
# Keys starting with spark.rocket.conf support every rocketmq client setting
spark.rocket.conf.pull.max.speed.per.partition		=   5000
```

### 6. Committing offsets

#### 6.1 Manual commit

```scala
dstream.rocketCommitOffsets()
```

#### 6.2 Automatic commit

When Spark Streaming processes data, the offset commit and the data processing may not happen in the same operator, so a failed stage can lose data while its offsets are still committed. To solve this, the fire framework provides the ***foreachRDDAtLeastOnce*** operator, which guarantees no data loss, retries on failure (3 times by default), and commits automatically on success.

```scala
@Streaming(20) // the spark streaming batch interval
@RocketMQ(brokers = "bigdata_test", topics = "fire", groupId = "fire", tag = "*")
object AtLeastOnceTest extends BaseSparkStreaming {

  override def process: Unit = {
    val dstream = this.fire.createRocketMqPullStream()

    // At-least-once semantics: offsets are committed automatically on success; on failure the batch is retried the configured number of times, and the task exits if it still fails
    dstream.foreachRDDAtLeastOnce(rdd => {
      val studentRDD = rdd.map(t => JSONUtils.parseObject[Student](t.value())).repartition(2)
      val insertSql = s"INSERT INTO spark_test(name, age, createTime, length, sex) VALUES (?, ?, ?, ?, ?)"
      println("rocket.brokers.name=>" + this.conf.getString("rocket.brokers.name"))
      studentRDD.toDF().jdbcBatchUpdate(insertSql, Seq("name", "age", "createTime", "length", "sex"), batch = 1)
    })

    this.fire.start
  }
}
```

### 7. Code examples

[1. Spark example](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/streaming/RocketTest.scala)

[2. Flink streaming example](../fire-examples/flink-examples/src/main/scala/com/zto/fire/examples/flink/connector/rocketmq/RocketTest.scala)

[3. Flink sql connector example](../fire-examples/flink-examples/src/main/scala/com/zto/fire/examples/flink/connector/rocketmq/RocketMQConnectorTest.scala)

### 8. The @RocketMQ annotation

```java
/**
 * rocketmq cluster connection info
 */
String brokers();

/**
 * rocketmq topics, comma separated
 */
String topics();

/**
 * consumer group id
 */
String groupId();

/**
 * the tag to consume
 */
String tag() default "*";

/**
 * where to start consuming from
 */
String startingOffset() default "";

/**
 * whether to enable automatic offset commits
 */
boolean autoCommit() default false;

/**
 * RocketMQ-client parameters, given as key=value pairs
 */
String[] config() default "";
```

### 9. Configuration options

| Option                              | Engine | Meaning                                                      |
| ----------------------------------- | ------ | ------------------------------------------------------------ |
| fire.rocket.cluster.map.            | common | defines a rocketmq cluster alias                             |
| rocket.conf.                        | common | fixed prefix that passes any rocketmq-client parameter through |
| rocket.brokers.name                 | common | nameserver address or alias                                  |
| rocket.topics                       | common | topic name                                                   |
| rocket.group.id                     | common | consumer group id                                            |
| rocket.failOnDataLoss               | common | whether to fail on data loss                                 |
| rocket.forceSpecial                 | common | if true, rocketmq always starts consuming from the specific available offset |
| rocket.enable.auto.commit           | common | whether offsets are committed automatically                  |
| rocket.starting.offsets             | common | the starting consumer position                               |
| rocket.consumer.tag                 | common | the tag the rocketmq consumer subscribes to                  |
| rocket.pull.max.speed.per.partition | spark  | number of messages pulled per partition per pull             |
| rocket.consumer.instance            | spark  | distinguishes consumer instances                             |
| rocket.sink.parallelism             | flink  | sink parallelism                                             |



================================================
FILE: docs/datasource.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Spark DataSource enhancements

The Spark DataSource API is powerful. To make it even more flexible, the Fire framework wraps it further so that options and similar settings can live in the configuration file; combined with a real-time platform's configuration center, a tuning change takes effect with nothing more than a restart.

[Example program:](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/datasource/DataSourceTest.scala)

```scala
val ds = this.fire.createDataFrame(Student.newStudentList(), classOf[Student])
ds.createOrReplaceTempView("test")

val dataFrame = this.fire.sql("select * from test")

// 1. Every argument of dataFrame.write.format.mode.save can be supplied via the configuration file
// dataFrame.writeEnhance()

// 2. Some arguments of dataFrame.write.mode.save come from the configuration file; the rest are hard-coded
val savePath = "/user/hive/warehouse/hudi.db/hudi_bill_event_test"

// If an option is set both in code and in the configuration file, the configuration file wins for identical keys; distinct options all take effect
val options = Map(
    "hoodie.datasource.write.recordkey.field" -> "id",
    "hoodie.datasource.write.precombine.field" -> "id"
)

// keyNum selects the option group with the matching numeric suffix in the configuration file
// dataFrame.writeEnhance("org.apache.hudi", SaveMode.Append, savePath, options = options, keyNum = 2)

// read.format.mode.load(path)
this.fire.readEnhance(keyNum = 3)
```

Configuration file:

```properties
# 1. hudi datasource, configured entirely through the configuration file
spark.datasource.format=org.apache.hudi
spark.datasource.saveMode=Append
# Selects between calling save(path) and saveAsTable
spark.datasource.isSaveTable=false
# Passed to the underlying save or saveAsTable call
spark.datasource.saveParam=/user/hive/warehouse/hudi.db/hudi_bill_event_test

# Keys prefixed with spark.datasource.options. configure hudi parameters and override identical options set in code
spark.datasource.options.hoodie.datasource.write.recordkey.field=id
spark.datasource.options.hoodie.datasource.write.precombine.field=id
spark.datasource.options.hoodie.datasource.write.partitionpath.field=ds
spark.datasource.options.hoodie.table.name=hudi.hudi_bill_event_test
spark.datasource.options.hoodie.datasource.write.hive_style_partitioning=true
spark.datasource.options.hoodie.datasource.write.table.type=MERGE_ON_READ
spark.datasource.options.hoodie.insert.shuffle.parallelism=128
spark.datasource.options.hoodie.upsert.shuffle.parallelism=128
spark.datasource.options.hoodie.fail.on.timeline.archiving=false
spark.datasource.options.hoodie.clustering.inline=true
spark.datasource.options.hoodie.clustering.inline.max.commits=8
spark.datasource.options.hoodie.clustering.plan.strategy.target.file.max.bytes=1073741824
spark.datasource.options.hoodie.clustering.plan.strategy.small.file.limit=629145600
spark.datasource.options.hoodie.clustering.plan.strategy.daybased.lookback.partitions=2

# 2. A second data source, distinguished by the numeric key suffix, partially configured in the file
spark.datasource.format2=org.apache.hudi2
spark.datasource.saveMode2=Overwrite
# Selects between calling save(path) and saveAsTable
spark.datasource.isSaveTable2=false
# Passed to the underlying save or saveAsTable call
spark.datasource.saveParam2=/user/hive/warehouse/hudi.db/hudi_bill_event_test2

# 3. A third data source, used for a read in the code
spark.datasource.format3=org.apache.hudi3
spark.datasource.loadParam3=/user/hive/warehouse/hudi.db/hudi_bill_event_test3
spark.datasource.options.hoodie.datasource.write.recordkey.field3=id3
```



================================================
FILE: docs/dev/config.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Configuration

Real-time tasks come with a large and varied set of tuning parameters, and the fire framework makes setting them straightforward. Besides helping developers with development and tuning, it also exposes a configuration interface so a business platform can manage configuration centrally.

### 1. User configuration

The Fire framework supports configuration via interface calls, apollo, configuration files, and annotations. Spark & flink **engine parameters**, **[fire framework parameters](properties.md)**, and **user-defined parameters** can be mixed freely, and configuration can be changed dynamically at runtime. The common approaches are ([*built-in fire parameters*](properties.md)):

- **Configuration file:** create a properties file named after the task class
- **Interface:** fire exposes a configuration interface through which settings are fetched, enabling platform-side configuration management
- **Annotations:** configure cluster environments, connectors, and tuning parameters with annotations

#### 1.1 Annotations

```scala
// A general-purpose configuration annotation that accepts arbitrary parameters, can replace connector annotations (such as @Hive or @Kafka), and supports comments and multiple lines
@Config(
  """
    |# Supports Flink tuning parameters, Fire framework parameters, user-defined parameters, etc.
    |state.checkpoints.num-retained=30
    |state.checkpoints.dir=hdfs:///user/flink/checkpoint
    |my.conf=hello
    |""")
// Connect to the given hive; aliases such as @Hive("test") are supported and must be defined in cluster.properties
@Hive("thrift://localhost:9083")
// Checkpoint every 100s with unaligned checkpoints enabled; other checkpoint settings such as the timeout and the minimum pause between checkpoints are also supported
@Checkpoint(interval = 100, unaligned = true)
// Configure the kafka connector; multiple kafka consumers are distinguished by the numeric suffix: @Kafka2, @Kafka3, @Kafka5, etc.; urls and aliases both work
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
// Configure the rocketmq connector; consuming multiple rocketmq clusters is supported the same way; urls and aliases both work
@RocketMQ(brokers = "bigdata_test", topics = "fire", groupId = "fire", tag = "*", startingOffset = "latest")
// The jdbc annotation infers the driverClass automatically; multiple jdbc data sources are supported; urls and aliases both work
@Jdbc(url = "jdbc:mysql://mysql-server:3306/fire", username = "root", password = "..root726")
// Configure the HBase data source; reading and writing multiple HBase clusters is supported; urls and aliases both work
@HBase("localhost:2181")
```

#### 1.2 Configuration files

By convention, the Fire framework automatically loads at startup the .properties file under the resources directory (subdirectories are supported) that shares the task's name. If the configuration file defines the same key as @Config or another configuration annotation, the configuration file takes priority. [*Built-in fire parameters*](properties.md)

<img src="./img/configuration.png"></img>

Additionally, if several tasks in one project share settings such as the jdbc url or the hbase cluster address, put them into a file named **common.properties** under the resources directory. Every task loads this file before its own configuration, so the settings are reused; keys in common.properties have lower priority than task-level configuration.
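As a sketch, a project-level common.properties might hold shared connection settings like these; the addresses are placeholders:

```properties
# common.properties: shared by every task in the project
db.jdbc.url                  =   jdbc:mysql://mysql-server:3306/fire
fire.hbase.cluster.map.test  =   zk01:2181,zk02:2181,zk03:2181
fire.kafka.cluster.map.mq    =   kafka01:9092,kafka02:9092,kafka03:9092
```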

#### 1.3 Platform-based

With both annotations and configuration files, changing a parameter means editing code, recompiling, and redeploying. To save that effort, fire provides a parameter-setting interface: a real-time platform can push task-level settings from its web pages into each task, so real-time tasks are tuned from the browser. Settings applied through the interface take priority over configuration files and annotations.

<img src="./img/web-config.png"></img>

#### 1.4 Hot configuration updates

A spark or flink task built on fire can change its user configuration at runtime. To adjust, say, the jdbc batch size of a running task, call fire's hot-update interface; fire then distributes the new configuration to every spark executor and every flink taskmanager.

### 2. Reading configuration

Fire wraps a unified configuration api: whether on spark or flink, on the Driver | JobManager side or the Executor | TaskManager side, one line of code fetches any setting. With this api there is no need to override the open method in flink's map and similar operators, which is very convenient.

```scala
this.conf.getString("my.conf")
this.conf.getInt("state.checkpoints.num-retained")
...
```

### 3. Platform configuration

The Fire framework is the bridge between real-time tasks and the real-time platform, and platform integration was a design goal from the start. Sensitive settings such as cluster connection details can be constrained centrally through the configuration center. For example, when a hive thrift address is migrated, change it in the configuration center, raise the entry's priority to urgent, and ask the affected tasks to restart; the thrift address is then updated everywhere at once. Entries marked urgent have the highest priority, giving the real-time platform a unified safety net for configuration.

### 4. Configuration aliases

Raw urls are hard to remember and hard to manage centrally: if a data source's url changes, many tasks are affected. To solve this, fire lets you define an alias for a data source url:

```scala
// Using urls directly
@Hive("thrift://localhost:9083")
@HBase("localhost:2181")

// Using aliases
@Hive("batch")
@HBase("test")
```

It is recommended to define aliases in *[cluster.properties](..//fire-core/src/main/resources/cluster.properties)*; a few examples:

```properties
# Alias the hbase cluster as test, so the hbase annotation becomes @HBase("test")
fire.hbase.cluster.map.test=zk01:2181,zk02:2181,zk03:2181
# Map the kafka cluster address to the alias mq, so the kafka annotation becomes @Kafka(brokers = "mq", topics = "fire", groupId = "fire")
fire.kafka.cluster.map.mq=kafka01:9092,kafka02:9092,kafka03:9092
# Alias the hive metastore address as batch, so the hive annotation becomes @Hive("batch")
fire.hive.cluster.map.batch=thrift://thrift01:9083,thrift://thrift02:9083
```



### 5. Configuration priority

Fire offers many ways to set a parameter. For the same key, the priority order is:

***fire.properties < cluster.properties < general configuration-center settings < spark.properties|flink.properties < spark-core.properties|spark-streaming.properties|structured-streaming.properties|flink-streaming.properties|flink-batch.properties < common.properties < annotations < user configuration file < urgent configuration-center settings***
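As a concrete illustration of the ordering, using a made-up key my.batch.size: if common.properties sets it to 100 and the task's own configuration file sets it to 200, the task-level file wins because it sits further right in the chain:

```properties
# common.properties (lower priority)
my.batch.size = 100

# MyTask.properties, the user configuration file (higher priority)
# The effective value at runtime is 200
my.batch.size = 200
```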

### 6. Built-in configuration files

The Fire framework ships several configuration files, one per engine scenario:

- **fire.properties**: the framework's top-level configuration file, located in fire-core; its settings target the fire framework itself and contain no spark or flink engine settings
- **cluster.properties**: holds each company's cluster address mappings; cluster addresses are sensitive, so they get a file of their own
- **spark.properties**: the top-level configuration file of the spark engine, located in fire-spark
- **spark-core.properties**: located in fire-spark; configures spark core tasks
- **spark-streaming.properties**: located in fire-spark; used for spark streaming tasks
- **structured-streaming.properties**: located in fire-spark; configures structured streaming tasks
- **flink.properties**: located in fire-flink; the top-level configuration file of the flink engine
- **flink-streaming.properties**: located in fire-flink; configures flink streaming tasks
- **flink-batch.properties**: located in fire-flink; configures flink batch tasks


================================================
FILE: docs/dev/deploy-script.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

## Task submission scripts

### 1. Flink on yarn

***Note:** it is strongly recommended to submit tasks in flink's run-application mode, which unlocks more fire features, including the restful interface and configuration management, and integrates better with a real-time platform; per-job mode is not recommended.*

```shell
#!/bin/bash
# author: wangchenglong
# date: 2022-06-30 13:10:13
# desc: generic flink task submission script
# usage: ./deploy.sh com.zto.fire.examples.flink.Test

export FLINK_HOME=/opt/flink-1.14.3
export PATH=$FLINK_HOME/bin:$PATH

# Submit the flink task to yarn in run-application mode so it can interact with the real-time platform.
# Flink engine parameters, fire framework parameters, and user-defined parameters can all be passed via -D
# and read in code through this.conf.get; -s points at the checkpoint path to restore from.
# Note: comments must not follow a trailing backslash, or the line continuation breaks.
flink run-application -t yarn-application \
-D taskmanager.memory.process.size=4g \
-D state.checkpoints.dir=hdfs:///user/flink/checkpoint/fire \
-D flink.stream.checkpoint.interval=6000 \
-D fire.shutdown.auto.exit=true \
--allowNonRestoredState \
-s hdfs:/user/flink/checkpoint/xxx/chk-5/_metadata \
-ynm fire_test -yqu root.default -ynm test -ys 1 -ytm 2g -c $1 zto-flink*.jar $*
```

### 二、Spark on yarn

```shell
#!/bin/bash
# author: wangchenglong
# date: 2022-06-30 13:24:13
# desc: generic spark task submission script
# usage: ./deploy.sh com.zto.fire.examples.spark.Test

export SPARK_HOME=/opt/spark3.0.2
export PATH=$SPARK_HOME/bin:$PATH

# Submit the spark task to yarn in cluster mode.
# Spark engine parameters, fire framework parameters, and user-defined parameters can all be passed via --conf
# and read in code through this.conf.get.
spark-submit \
--master yarn --deploy-mode cluster --class $1 --num-executors 20 --executor-cores 1 \
--driver-memory 1g --executor-memory 1g \
--conf fire.shutdown.auto.exit=true \
./zto-spark*.jar $*
```

================================================
FILE: docs/dev/engine-env.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Dependency management

Many of the Fire framework's dependencies use the provided scope, which avoids jar conflicts and bloated artifacts but can cause class-not-found errors for missing engine dependencies. There are two fixes: change the scope of the provided dependencies fire uses to compile (in the task's pom.xml), or place the jars of those dependencies into spark's or flink's lib directory. This document takes the second approach and puts the missing jars into the engine deployment's lib directory. The missing jars can be found on [the network drive](https://pan.baidu.com/s/16kUGQIj2gQjWZdbmuxuyXw?pwd=fire), in the [*maven central repository*](http://mvnrepository.com/), or in a local repository. The dependency list:
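If you take the first approach instead, the task's pom.xml overrides the scope of the affected dependency. A sketch, using the kafka connector from the jar list below as an example; substitute the coordinates of whichever provided dependency is missing:

```xml
<!-- Override a provided dependency so it is bundled with the task jar (coordinates are an example) -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.12</artifactId>
    <version>1.14.3</version>
    <scope>compile</scope>
</dependency>
```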

## 1. Flink on YARN environment

***Note:** the jar list below is based on flink 1.14.3.*

```shell
[fire@node01 lib]$ pwd
/home/fire/opt/flink-1.14.3/lib
[fire@node01 lib]$ ll
总用量 307580
-rwxr-xr-x 1 fire fire   1112191 2月  17 13:43 antlr-3.4.jar
-rwxr-xr-x 1 fire fire    164368 2月  17 13:43 antlr-runtime-3.4.jar
-rwxr-xr-x 1 fire fire   7685845 1月  26 13:47 flink-connector-hive_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire    255974 4月  21 14:31 flink-connector-jdbc_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire    250201 2月  16 18:27 flink-connector-jdbc_2.12-1.14.3.jar.bak
-rwxr-xr-x 1 fire fire    389763 4月  18 15:30 flink-connector-kafka_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire     85584 1月  11 07:42 flink-csv-1.14.3.jar
-rwxr-xr-x 1 fire fire 136054094 1月  11 07:45 flink-dist_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire     78159 2月  14 13:20 flink-hadoop-compatibility_2.12-1.14.2.jar
-rwxr-xr-x 1 fire fire    153145 1月  11 07:42 flink-json-1.14.3.jar
-rwxr-xr-x 1 fire fire    757120 2月  14 13:23 flink-scala_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire  24051799 2月  16 17:42 flink-shaded-hadoop-2-2.6.5-9.0.jar
-rwxr-xr-x 1 fire fire   7709731 8月  22 2021  flink-shaded-zookeeper-3.4.14.jar
-rwxr-xr-x 1 fire fire  39633410 1月  11 07:45 flink-table_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire    119407 2月  16 18:34 flink-table-api-java-bridge_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire     79018 2月  16 18:35 flink-table-api-scala_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire     50197 2月  16 18:35 flink-table-api-scala-bridge_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire    928547 2月  16 18:34 flink-table-common-1.14.3.jar
-rwxr-xr-x 1 fire fire  35774469 2月  16 18:34 flink-table-planner_2.12-1.14.3.jar
-rwxr-xr-x 1 fire fire    241622 2月  17 09:29 gson-2.8.5.jar
-rwxr-xr-x 1 fire fire   2256213 2月  16 18:42 guava-18.0.jar
-rwxr-xr-x 1 fire fire      9808 2月  17 09:24 hadoop-client-2.6.0-cdh5.12.1.jar
-rwxr-xr-x 1 fire fire   3539744 2月  17 09:24 hadoop-common-2.6.0-cdh5.12.1.jar
-rwxr-xr-x 1 fire fire   1765829 2月  17 09:24 hadoop-core-2.6.0-mr1-cdh5.12.1.jar
-rwxr-xr-x 1 fire fire  11550546 2月  17 09:24 hadoop-hdfs-2.6.0-cdh5.12.1.jar
-rwxr-xr-x 1 fire fire   1321508 2月  16 21:18 hbase-client-1.2.0-cdh5.12.1.jar
-rwxr-xr-x 1 fire fire    585568 2月  16 21:18 hbase-common-1.2.0-cdh5.12.1.jar
-rwxr-xr-x 1 fire fire   4618035 2月  16 21:18 hbase-protocol-1.2.0-cdh5.12.1.jar
-rwxr-xr-x 1 fire fire    292289 2月  18 17:39 hive-common-1.2.1.jar
-rwxr-xr-x 1 fire fire  20599029 2月  18 17:39 hive-exec-1.2.1.jar
-rwxr-xr-x 1 fire fire   5505100 2月  18 17:39 hive-metastore-1.2.1.jar
-rwxr-xr-x 1 fire fire     95806 2月  16 18:46 javax.servlet-api-3.1.0.jar
-rwxr-xr-x 1 fire fire    201124 2月  17 09:29 jdo-api-3.0.1.jar
-rwxr-xr-x 1 fire fire   3269712 2月  16 18:33 kafka-clients-2.4.1.jar
-rwxr-xr-x 1 fire fire    275186 2月  17 09:29 libfb303-0.9.0.jar
-rwxr-xr-x 1 fire fire    208006 1月   9 04:13 log4j-1.2-api-2.17.1.jar
-rwxr-xr-x 1 fire fire    301872 1月   9 04:13 log4j-api-2.17.1.jar
-rwxr-xr-x 1 fire fire   1790452 1月   9 04:13 log4j-core-2.17.1.jar
-rwxr-xr-x 1 fire fire     24279 1月   9 04:13 log4j-slf4j-impl-2.17.1.jar
-rwxr-xr-x 1 fire fire     82123 2月  16 21:25 metrics-core-2.2.0.jar
-rwxr-xr-x 1 fire fire    992805 4月  21 15:15 mysql-connector-java-5.1.41.jar
```

## 2. Spark on YARN environment

***Note:** the jar list below is based on spark 3.0.2.* [Network drive download](https://pan.baidu.com/s/16kUGQIj2gQjWZdbmuxuyXw?pwd=fire)

```shell
[fire@node01 jars]$ pwd
/home/fire/opt/spark3.0.2/jars
[fire@node01 jars]$ ll
总用量 216276
-rw-r--r-- 1 fire fire    69409 2月  16 2021 activation-1.1.1.jar
-rw-r--r-- 1 fire fire   134044 2月  16 2021 aircompressor-0.10.jar
-rw-r--r-- 1 fire fire  1168113 2月  16 2021 algebra_2.12-2.0.0-M2.jar
-rw-r--r-- 1 fire fire   445288 2月  16 2021 antlr-2.7.7.jar
-rw-r--r-- 1 fire fire   336803 2月  16 2021 antlr4-runtime-4.7.1.jar
-rw-r--r-- 1 fire fire   164368 2月  16 2021 antlr-runtime-3.4.jar
-rw-r--r-- 1 fire fire     4467 2月  16 2021 aopalliance-1.0.jar
-rw-r--r-- 1 fire fire    27006 2月  16 2021 aopalliance-repackaged-2.6.1.jar
-rw-r--r-- 1 fire fire    44925 2月  16 2021 apacheds-i18n-2.0.0-M15.jar
-rw-r--r-- 1 fire fire   691479 2月  16 2021 apacheds-kerberos-codec-2.0.0-M15.jar
-rw-r--r-- 1 fire fire   448794 2月  16 2021 apache-log4j-extras-1.2.17.jar
-rw-r--r-- 1 fire fire    16560 2月  16 2021 api-asn1-api-1.0.0-M20.jar
-rw-r--r-- 1 fire fire    79912 2月  16 2021 api-util-1.0.0-M20.jar
-rw-r--r-- 1 fire fire  1194003 2月  16 2021 arpack_combined_all-0.1.jar
-rw-r--r-- 1 fire fire    64674 2月  16 2021 arrow-format-0.15.1.jar
-rw-r--r-- 1 fire fire   105777 2月  16 2021 arrow-memory-0.15.1.jar
-rw-r--r-- 1 fire fire  1437215 2月  16 2021 arrow-vector-0.15.1.jar
-rw-r--r-- 1 fire fire    20437 2月  16 2021 audience-annotations-0.5.0.jar
-rw-r--r-- 1 fire fire   176285 2月  16 2021 automaton-1.11-8.jar
-rw-r--r-- 1 fire fire  1556863 2月  16 2021 avro-1.8.2.jar
-rw-r--r-- 1 fire fire   132989 2月  16 2021 avro-ipc-1.8.2.jar
-rw-r--r-- 1 fire fire   187052 2月  16 2021 avro-mapred-1.8.2-hadoop2.jar
-rw-r--r-- 1 fire fire   110600 2月  16 2021 bonecp-0.8.0.RELEASE.jar
-rw-r--r-- 1 fire fire 13826799 2月  16 2021 breeze_2.12-1.0.jar
-rw-r--r-- 1 fire fire   134696 2月  16 2021 breeze-macros_2.12-1.0.jar
-rw-r--r-- 1 fire fire  3226851 2月  16 2021 cats-kernel_2.12-2.0.0-M4.jar
-rw-r--r-- 1 fire fire   211523 2月  16 2021 chill_2.12-0.9.5.jar
-rw-r--r-- 1 fire fire    58684 2月  16 2021 chill-java-0.9.5.jar
-rw-r--r-- 1 fire fire   246918 2月  16 2021 commons-beanutils-1.9.4.jar
-rw-r--r-- 1 fire fire    41123 2月  16 2021 commons-cli-1.2.jar
-rw-r--r-- 1 fire fire   284184 2月  16 2021 commons-codec-1.10.jar
-rw-r--r-- 1 fire fire   588337 2月  16 2021 commons-collections-3.2.2.jar
-rw-r--r-- 1 fire fire    71626 2月  16 2021 commons-compiler-3.0.16.jar
-rw-r--r-- 1 fire fire   632424 2月  16 2021 commons-compress-1.20.jar
-rw-r--r-- 1 fire fire   298829 2月  16 2021 commons-configuration-1.6.jar
-rw-r--r-- 1 fire fire   166244 2月  16 2021 commons-crypto-1.1.0.jar
-rw-r--r-- 1 fire fire   160519 2月  16 2021 commons-dbcp-1.4.jar
-rw-r--r-- 1 fire fire   143602 2月  16 2021 commons-digester-1.8.jar
-rw-r--r-- 1 fire fire   305001 2月  16 2021 commons-httpclient-3.1.jar
-rw-r--r-- 1 fire fire   185140 2月  16 2021 commons-io-2.4.jar
-rw-r--r-- 1 fire fire   284220 2月  16 2021 commons-lang-2.6.jar
-rw-r--r-- 1 fire fire   503880 2月  16 2021 commons-lang3-3.9.jar
-rw-r--r-- 1 fire fire    62050 2月  16 2021 commons-logging-1.1.3.jar
-rw-r--r-- 1 fire fire  2035066 2月  16 2021 commons-math3-3.4.1.jar
-rw-r--r-- 1 fire fire   273370 2月  16 2021 commons-net-3.1.jar
-rw-r--r-- 1 fire fire    96221 2月  16 2021 commons-pool-1.5.4.jar
-rw-r--r-- 1 fire fire   197176 2月  16 2021 commons-text-1.6.jar
-rw-r--r-- 1 fire fire    79845 2月  16 2021 compress-lzf-1.0.3.jar
-rw-r--r-- 1 fire fire   164422 2月  16 2021 core-1.1.2.jar
-rw-r--r-- 1 fire fire    69500 2月  16 2021 curator-client-2.7.1.jar
-rw-r--r-- 1 fire fire   186273 2月  16 2021 curator-framework-2.7.1.jar
-rw-r--r-- 1 fire fire   270342 2月  16 2021 curator-recipes-2.7.1.jar
-rw-r--r-- 1 fire fire   339666 2月  16 2021 datanucleus-api-jdo-3.2.6.jar
-rw-r--r-- 1 fire fire  1890075 2月  16 2021 datanucleus-core-3.2.10.jar
-rw-r--r-- 1 fire fire  1809447 2月  16 2021 datanucleus-rdbms-3.2.9.jar
-rw-r--r-- 1 fire fire  3224708 2月  16 2021 derby-10.12.1.1.jar
-rw-r--r-- 1 fire fire    18497 2月  16 2021 flatbuffers-java-1.9.0.jar
-rw-r--r-- 1 fire fire    14395 2月  16 2021 generex-1.0.2.jar
-rw-r--r-- 1 fire fire   190432 2月  16 2021 gson-2.2.4.jar
-rw-r--r-- 1 fire fire  2189117 2月  16 2021 guava-14.0.1.jar
-rw-r--r-- 1 fire fire   710492 2月  16 2021 guice-3.0.jar
-rw-r--r-- 1 fire fire    65012 2月  16 2021 guice-servlet-3.0.jar
-rw-r--r-- 1 fire fire    41094 2月  16 2021 hadoop-annotations-2.7.4.jar
-rw-r--r-- 1 fire fire    94621 2月  16 2021 hadoop-auth-2.7.4.jar
-rw-r--r-- 1 fire fire    26243 2月  16 2021 hadoop-client-2.7.4.jar
-rw-r--r-- 1 fire fire  3499224 2月  16 2021 hadoop-common-2.7.4.jar
-rw-r--r-- 1 fire fire  8350471 2月  16 2021 hadoop-hdfs-2.7.4.jar
-rw-r--r-- 1 fire fire   543852 2月  16 2021 hadoop-mapreduce-client-app-2.7.4.jar
-rw-r--r-- 1 fire fire   776862 2月  16 2021 hadoop-mapreduce-client-common-2.7.4.jar
-rw-r--r-- 1 fire fire  1558288 2月  16 2021 hadoop-mapreduce-client-core-2.7.4.jar
-rw-r--r-- 1 fire fire    62960 2月  16 2021 hadoop-mapreduce-client-jobclient-2.7.4.jar
-rw-r--r-- 1 fire fire    72050 2月  16 2021 hadoop-mapreduce-client-shuffle-2.7.4.jar
-rw-r--r-- 1 fire fire  2039372 2月  16 2021 hadoop-yarn-api-2.7.4.jar
-rw-r--r-- 1 fire fire   166121 2月  16 2021 hadoop-yarn-client-2.7.4.jar
-rw-r--r-- 1 fire fire  1679789 2月  16 2021 hadoop-yarn-common-2.7.4.jar
-rw-r--r-- 1 fire fire   388572 2月  16 2021 hadoop-yarn-server-common-2.7.4.jar
-rw-r--r-- 1 fire fire    58699 2月  16 2021 hadoop-yarn-server-web-proxy-2.7.4.jar
-rw-r--r-- 1 fire fire   138464 2月  16 2021 hive-beeline-1.2.1.spark2.jar
-rw-r--r-- 1 fire fire    40817 2月  16 2021 hive-cli-1.2.1.spark2.jar
-rw-r--r-- 1 fire fire 11498852 2月  16 2021 hive-exec-1.2.1.spark2.jar
-rw-r--r-- 1 fire fire   100680 2月  16 2021 hive-jdbc-1.2.1.spark2.jar
-rw-r--r-- 1 fire fire  5505200 2月  16 2021 hive-metastore-1.2.1.spark2.jar
-rw-r--r-- 1 fire fire   200223 2月  16 2021 hk2-api-2.6.1.jar
-rw-r--r-- 1 fire fire   203358 2月  16 2021 hk2-locator-2.6.1.jar
-rw-r--r-- 1 fire fire   131590 2月  16 2021 hk2-utils-2.6.1.jar
-rw-r--r-- 1 fire fire  1475955 2月  16 2021 htrace-core-3.1.0-incubating.jar
-rw-r--r-- 1 fire fire   767140 2月  16 2021 httpclient-4.5.6.jar
-rw-r--r-- 1 fire fire   328347 2月  16 2021 httpcore-4.4.12.jar
-rw-r--r-- 1 fire fire    27156 2月  16 2021 istack-commons-runtime-3.0.8.jar
-rw-r--r-- 1 fire fire  1282424 2月  16 2021 ivy-2.4.0.jar
-rw-r--r-- 1 fire fire    67889 2月  16 2021 jackson-annotations-2.10.0.jar
-rw-r--r-- 1 fire fire   348635 2月  16 2021 jackson-core-2.10.0.jar
-rw-r--r-- 1 fire fire   232248 2月  16 2021 jackson-core-asl-1.9.13.jar
-rw-r--r-- 1 fire fire  1400944 2月  16 2021 jackson-databind-2.10.0.jar
-rw-r--r-- 1 fire fire    46646 2月  16 2021 jackson-dataformat-yaml-2.10.0.jar
-rw-r--r-- 1 fire fire   105898 2月  16 2021 jackson-datatype-jsr310-2.10.3.jar
-rw-r--r-- 1 fire fire    18336 2月  16 2021 jackson-jaxrs-1.9.13.jar
-rw-r--r-- 1 fire fire   780664 2月  16 2021 jackson-mapper-asl-1.9.13.jar
-rw-r--r-- 1 fire fire    34991 2月  16 2021 jackson-module-jaxb-annotations-2.10.0.jar
-rw-r--r-- 1 fire fire    43740 2月  16 2021 jackson-module-paranamer-2.10.0.jar
-rw-r--r-- 1 fire fire   341862 2月  16 2021 jackson-module-scala_2.12-2.10.0.jar
-rw-r--r-- 1 fire fire    27084 2月  16 2021 jackson-xc-1.9.13.jar
-rw-r--r-- 1 fire fire    44399 2月  16 2021 jakarta.activation-api-1.2.1.jar
-rw-r--r-- 1 fire fire    25058 2月  16 2021 jakarta.annotation-api-1.3.5.jar
-rw-r--r-- 1 fire fire    18140 2月  16 2021 jakarta.inject-2.6.1.jar
-rw-r--r-- 1 fire fire    91930 2月  16 2021 jakarta.validation-api-2.0.2.jar
-rw-r--r-- 1 fire fire   140376 2月  16 2021 jakarta.ws.rs-api-2.1.6.jar
-rw-r--r-- 1 fire fire   115498 2月  16 2021 jakarta.xml.bind-api-2.3.2.jar
-rw-r--r-- 1 fire fire   926574 2月  16 2021 janino-3.0.16.jar
-rw-r--r-- 1 fire fire    16993 2月  16 2021 JavaEWAH-0.3.2.jar
-rw-r--r-- 1 fire fire   780265 2月  16 2021 javassist-3.25.0-GA.jar
-rw-r--r-- 1 fire fire     2497 2月  16 2021 javax.inject-1.jar
-rw-r--r-- 1 fire fire    95806 2月  16 2021 javax.servlet-api-3.1.0.jar
-rw-r--r-- 1 fire fire   395195 2月  16 2021 javolution-5.5.1.jar
-rw-r--r-- 1 fire fire   105134 2月  16 2021 jaxb-api-2.2.2.jar
-rw-r--r-- 1 fire fire  1013367 2月  16 2021 jaxb-runtime-2.3.2.jar
-rw-r--r-- 1 fire fire    16537 2月  16 2021 jcl-over-slf4j-1.7.30.jar
-rw-r--r-- 1 fire fire   201124 2月  16 2021 jdo-api-3.0.1.jar
-rw-r--r-- 1 fire fire   244502 2月  16 2021 jersey-client-2.30.jar
-rw-r--r-- 1 fire fire  1166647 2月  16 2021 jersey-common-2.30.jar
-rw-r--r-- 1 fire fire    32091 2月  16 2021 jersey-container-servlet-2.30.jar
-rw-r--r-- 1 fire fire    73349 2月  16 2021 jersey-container-servlet-core-2.30.jar
-rw-r--r-- 1 fire fire    76733 2月  16 2021 jersey-hk2-2.30.jar
-rw-r--r-- 1 fire fire    85815 2月  16 2021 jersey-media-jaxb-2.30.jar
-rw-r--r-- 1 fire fire   927721 2月  16 2021 jersey-server-2.30.jar
-rw-r--r-- 1 fire fire   539912 2月  16 2021 jetty-6.1.26.jar
-rw-r--r-- 1 fire fire    18891 2月  16 2021 jetty-sslengine-6.1.26.jar
-rw-r--r-- 1 fire fire   177131 2月  16 2021 jetty-util-6.1.26.jar
-rw-r--r-- 1 fire fire   232470 2月  16 2021 JLargeArrays-1.5.jar
-rw-r--r-- 1 fire fire   268780 2月  16 2021 jline-2.14.6.jar
-rw-r--r-- 1 fire fire   643043 2月  16 2021 joda-time-2.10.5.jar
-rw-r--r-- 1 fire fire   427780 2月  16 2021 jodd-core-3.5.2.jar
-rw-r--r-- 1 fire fire    12131 2月  16 2021 jpam-1.1.jar
-rw-r--r-- 1 fire fire    83632 2月  16 2021 json4s-ast_2.12-3.6.6.jar
-rw-r--r-- 1 fire fire   482486 2月  16 2021 json4s-core_2.12-3.6.6.jar
-rw-r--r-- 1 fire fire    36175 2月  16 2021 json4s-jackson_2.12-3.6.6.jar
-rw-r--r-- 1 fire fire   349025 2月  16 2021 json4s-scalap_2.12-3.6.6.jar
-rw-r--r-- 1 fire fire   100636 2月  16 2021 jsp-api-2.1.jar
-rw-r--r-- 1 fire fire    33031 2月  16 2021 jsr305-3.0.0.jar
-rw-r--r-- 1 fire fire    15071 2月  16 2021 jta-1.1.jar
-rw-r--r-- 1 fire fire  1175798 2月  16 2021 JTransforms-3.1.jar
-rw-r--r-- 1 fire fire     4592 2月  16 2021 jul-to-slf4j-1.7.30.jar
-rw-r--r-- 1 fire fire   410874 2月  16 2021 kryo-shaded-4.0.2.jar
-rw-r--r-- 1 fire fire   775174 2月  16 2021 kubernetes-client-4.9.2.jar
-rw-r--r-- 1 fire fire 11908731 2月  16 2021 kubernetes-model-4.9.2.jar
-rw-r--r-- 1 fire fire     3954 2月  16 2021 kubernetes-model-common-4.9.2.jar
-rw-r--r-- 1 fire fire  1045744 2月  16 2021 leveldbjni-all-1.8.jar
-rw-r--r-- 1 fire fire   313702 2月  16 2021 libfb303-0.9.3.jar
-rw-r--r-- 1 fire fire   246445 2月  16 2021 libthrift-0.12.0.jar
-rw-r--r-- 1 fire fire   489884 2月  16 2021 log4j-1.2.17.jar
-rw-r--r-- 1 fire fire    12488 2月  16 2021 logging-interceptor-3.12.6.jar
-rw-r--r-- 1 fire fire   649950 2月  16 2021 lz4-java-1.7.1.jar
-rw-r--r-- 1 fire fire    33786 2月  16 2021 machinist_2.12-0.6.8.jar
-rw-r--r-- 1 fire fire     3180 2月  16 2021 macro-compat_2.12-1.1.1.jar
-rw-r--r-- 1 fire fire  7343426 2月  16 2021 mesos-1.4.0-shaded-protobuf.jar
-rw-r--r-- 1 fire fire   105365 2月  16 2021 metrics-core-4.1.1.jar
-rw-r--r-- 1 fire fire    22042 2月  16 2021 metrics-graphite-4.1.1.jar
-rw-r--r-- 1 fire fire    20889 2月  16 2021 metrics-jmx-4.1.1.jar
-rw-r--r-- 1 fire fire    16642 2月  16 2021 metrics-json-4.1.1.jar
-rw-r--r-- 1 fire fire    23909 2月  16 2021 metrics-jvm-4.1.1.jar
-rw-r--r-- 1 fire fire     5711 2月  16 2021 minlog-1.3.0.jar
-rw-r--r-- 1 fire fire  4153218 2月  16 2021 netty-all-4.1.47.Final.jar
-rw-r--r-- 1 fire fire    54391 2月  16 2021 objenesis-2.5.1.jar
-rw-r--r-- 1 fire fire   423175 2月  16 2021 okhttp-3.12.6.jar
-rw-r--r-- 1 fire fire    88732 2月  16 2021 okio-1.15.0.jar
-rw-r--r-- 1 fire fire    19827 2月  16 2021 opencsv-2.3.jar
-rw-r--r-- 1 fire fire  1580620 2月  16 2021 orc-core-1.5.10-nohive.jar
-rw-r--r-- 1 fire fire   814061 2月  16 2021 orc-mapreduce-1.5.10-nohive.jar
-rw-r--r-- 1 fire fire    27749 2月  16 2021 orc-shims-1.5.10.jar
-rw-r--r-- 1 fire fire    65261 2月  16 2021 oro-2.0.8.jar
-rw-r--r-- 1 fire fire    19479 2月  16 2021 osgi-resource-locator-1.0.3.jar
-rw-r--r-- 1 fire fire    34654 2月  16 2021 paranamer-2.8.jar
-rw-r--r-- 1 fire fire  1097799 2月  16 2021 parquet-column-1.10.1.jar
-rw-r--r-- 1 fire fire    94995 2月  16 2021 parquet-common-1.10.1.jar
-rw-r--r-- 1 fire fire   848750 2月  16 2021 parquet-encoding-1.10.1.jar
-rw-r--r-- 1 fire fire   723203 2月  16 2021 parquet-format-2.4.0.jar
-rw-r--r-- 1 fire fire   285732 2月  16 2021 parquet-hadoop-1.10.1.jar
-rw-r--r-- 1 fire fire  2796935 2月  16 2021 parquet-hadoop-bundle-1.6.0.jar
-rw-r--r-- 1 fire fire  1048171 2月  16 2021 parquet-jackson-1.10.1.jar
-rw-r--r-- 1 fire fire   533455 2月  16 2021 protobuf-java-2.5.0.jar
-rw-r--r-- 1 fire fire   123052 2月  16 2021 py4j-0.10.9.jar
-rw-r--r-- 1 fire fire   100431 2月  16 2021 pyrolite-4.30.jar
-rw-r--r-- 1 fire fire   325335 2月  16 2021 RoaringBitmap-0.7.45.jar
-rw-r--r-- 1 fire fire   112235 2月  16 2021 scala-collection-compat_2.12-2.1.1.jar
-rw-r--r-- 1 fire fire 10672015 2月  16 2021 scala-compiler-2.12.10.jar
-rw-r--r-- 1 fire fire  5276900 2月  16 2021 scala-library-2.12.10.jar
-rw-r--r-- 1 fire fire   222980 2月  16 2021 scala-parser-combinators_2.12-1.1.2.jar
-rw-r--r-- 1 fire fire  3678534 2月  16 2021 scala-reflect-2.12.10.jar
-rw-r--r-- 1 fire fire   556575 2月  16 2021 scala-xml_2.12-1.2.0.jar
-rw-r--r-- 1 fire fire  3243337 2月  16 2021 shapeless_2.12-2.3.3.jar
-rw-r--r-- 1 fire fire     4028 2月  16 2021 shims-0.7.45.jar
-rw-r--r-- 1 fire fire    41472 2月  16 2021 slf4j-api-1.7.30.jar
-rw-r--r-- 1 fire fire    12211 2月  16 2021 slf4j-log4j12-1.7.30.jar
-rw-r--r-- 1 fire fire   302558 2月  16 2021 snakeyaml-1.24.jar
-rw-r--r-- 1 fire fire    48720 2月  16 2021 snappy-0.2.jar
-rw-r--r-- 1 fire fire  1969177 2月  16 2021 snappy-java-1.1.8.2.jar
-rw-r--r-- 1 fire fire  9409634 2月  16 2021 spark-catalyst_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire  9880087 2月  16 2021 spark-core_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire   430762 2月  16 2021 spark-graphx_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire   693694 2月  16 2021 spark-hive_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire  1886671 2月  16 2021 spark-hive-thriftserver_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire   374948 2月  16 2021 spark-kubernetes_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire    59868 2月  16 2021 spark-kvstore_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire    75937 2月  16 2021 spark-launcher_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire   295158 2月  16 2021 spark-mesos_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire  5887713 2月  16 2021 spark-mllib_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire   111921 2月  16 2021 spark-mllib-local_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire  2397705 2月  16 2021 spark-network-common_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire    86942 2月  16 2021 spark-network-shuffle_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire    52496 2月  16 2021 spark-repl_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire    30353 2月  16 2021 spark-sketch_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire  7160215 2月  16 2021 spark-sql_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire  1138146 2月  16 2021 spark-streaming_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire    15155 2月  16 2021 spark-tags_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire    10375 2月  16 2021 spark-tags_2.12-3.0.2-tests.jar
-rw-r--r-- 1 fire fire    51308 2月  16 2021 spark-unsafe_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire   331837 7月  28 2021 spark-yarn_2.12-3.0.2.jar
-rw-r--r-- 1 fire fire   331935 2月  16 2021 spark-yarn_2.12-3.0.2.jar.bak
-rw-r--r-- 1 fire fire  7188024 2月  16 2021 spire_2.12-0.17.0-M1.jar
-rw-r--r-- 1 fire fire    79588 2月  16 2021 spire-macros_2.12-0.17.0-M1.jar
-rw-r--r-- 1 fire fire     8261 2月  16 2021 spire-platform_2.12-0.17.0-M1.jar
-rw-r--r-- 1 fire fire    34601 2月  16 2021 spire-util_2.12-0.17.0-M1.jar
-rw-r--r-- 1 fire fire   236660 2月  16 2021 ST4-4.0.4.jar
-rw-r--r-- 1 fire fire    26514 2月  16 2021 stax-api-1.0.1.jar
-rw-r--r-- 1 fire fire    23346 2月  16 2021 stax-api-1.0-2.jar
-rw-r--r-- 1 fire fire   178149 2月  16 2021 stream-2.9.6.jar
-rw-r--r-- 1 fire fire   148627 2月  16 2021 stringtemplate-3.2.1.jar
-rw-r--r-- 1 fire fire    93210 2月  16 2021 super-csv-2.2.0.jar
-rw-r--r-- 1 fire fire   233745 2月  16 2021 threeten-extra-1.5.0.jar
-rw-r--r-- 1 fire fire   443986 2月  16 2021 univocity-parsers-2.9.0.jar
-rw-r--r-- 1 fire fire   281356 2月  16 2021 xbean-asm7-shaded-4.15.jar
-rw-r--r-- 1 fire fire  1386397 2月  16 2021 xercesImpl-2.12.0.jar
-rw-r--r-- 1 fire fire   220536 2月  16 2021 xml-apis-1.4.01.jar
-rw-r--r-- 1 fire fire    15010 2月  16 2021 xmlenc-0.52.jar
-rw-r--r-- 1 fire fire    99555 2月  16 2021 xz-1.5.jar
-rw-r--r-- 1 fire fire    35518 2月  16 2021 zjsonpatch-0.3.0.jar
-rw-r--r-- 1 fire fire   911603 2月  16 2021 zookeeper-3.4.14.jar
-rw-r--r-- 1 fire fire  4210625 2月  16 2021 zstd-jni-1.4.4-3.jar
```



================================================
FILE: docs/dev/integration.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

### 1. Build and install

```shell
# git clone https://github.com/ZTO-Express/fire.git
# mvn clean install -DskipTests -Pspark-3.0.2 -Pflink-1.14.3 -Pscala-2.12
```

It is recommended to deploy fire to a private Maven repository so that everyone can use it. When building the fire framework, you can target specific spark or flink versions as needed. The officially supported version pairs are as follows:

| Apache Spark | Apache Flink |
| ------------ | ------------ |
| 2.3.x        | 1.10.x       |
| 2.4.x        | 1.11.x       |
| 3.0.x        | 1.12.x       |
| 3.1.x        | 1.13.x       |
| 3.2.x        | 1.14.x       |
| 3.3.x        | 1.15.x       |
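Publishing fire to a private Maven repository, as recommended above, can be done in one command. Hypothetically, with a Nexus instance (the repository id and URL below are placeholders for your own settings):

```shell
# Build fire against spark 3.0.2 / flink 1.14.3 / scala 2.12 and publish it to
# a private Maven repository (repository id and URL are placeholders).
mvn clean deploy -DskipTests -Pspark-3.0.2 -Pflink-1.14.3 -Pscala-2.12 \
  -DaltDeploymentRepository=snapshots::default::https://nexus.example.com/repository/maven-snapshots/
```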

### 2. Maven dependencies

- [Sample pom for spark projects](pom/spark-pom.xml)
- [Sample pom for flink projects](pom/flink-pom.xml)

### 3. Development steps

The Fire framework establishes a unified coding style; following it makes spark and flink development straightforward.

```scala
package com.zto.fire.examples.flink

import com.zto.fire._
import com.zto.fire.common.anno.Config
import com.zto.fire.core.anno._
import com.zto.fire.flink.BaseFlinkStreaming
import com.zto.fire.flink.anno.Checkpoint

/**
 * Flink Streaming development based on Fire
 *
 * @contact Fire framework DingTalk group: 35373471
 */
@Config(
  """
    |# Flink tuning options, Fire framework options, user-defined options, etc. are supported
    |state.checkpoints.num-retained=30
    |state.checkpoints.dir=hdfs:///user/flink/checkpoint
    |""")
@Hive("thrift://localhost:9083") // connect to the specified hive metastore
@Checkpoint(interval = 100, unaligned = true) // checkpoint every 100s, with unaligned checkpoints enabled
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
object FlinkDemo extends BaseFlinkStreaming {

  /** Business logic lives in the process method, which the fire framework invokes automatically **/
  override def process: Unit = {
    val dstream = this.fire.createKafkaDirectStream() // consume kafka via the api
    this.fire.sql("""create table statement ...""")
    this.fire.sql("""insert into statement ...""")
    this.fire.start
  }
}
```

As the code above shows, integrating the fire framework boils down to the following steps:

#### 3.1 Implicit conversions

Both spark and flink jobs need the following import; this implicit conversion brings in many easy-to-use apis.

```scala
import com.zto.fire._
```

#### 3.2 Extend a parent class

The Fire framework provides parent classes for different engines and scenarios; extend the one that matches your use case:

##### 3.2.1 Spark engine parent classes:

- **SparkStreaming**: for developing Spark Streaming jobs
- **BaseSparkCore**: for developing Spark batch jobs
- **BaseStructuredStreaming**: for developing Spark Structured Streaming jobs

##### 3.2.2 Flink engine parent classes:

- **BaseFlinkStreaming**: for developing flink streaming jobs
- **BaseFlinkBatch**: for developing flink batch jobs

#### 3.3 Business logic

Fire parent classes define a common process method that the fire framework invokes automatically; there is no need to call it yourself. The process method is the entry point where all business logic lives.

```scala
override def process: Unit = {
    val dstream = this.fire.createKafkaDirectStream()
    dstream.print
    // submit the streaming job
    this.fire.start
}
```

***Note:** with Fire there is no need to write a main method or to initialize a SparkSession or a flink environment yourself. The fire framework initializes them automatically; just reference them via this. in your code. Spark or flink tuning options can be pasted directly into the @Config annotation and take effect when fire initializes the spark or flink engine context.*



================================================
FILE: docs/feature.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->





================================================
FILE: docs/highlight/checkpoint.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Dynamic Flink Checkpoint Tuning

  As a stateful streaming engine, Flink depends on periodic checkpoints. The checkpoint interval should be neither too long nor too short, and different jobs call for different settings. Even for the same job, checkpoints can fail in certain scenarios because of **timeouts** or **backpressure**. Let's first look at the problems with traditional checkpoint tuning:

## 1. Pain points of traditional checkpoint tuning

  Flink checkpoint **rate**, frequency, and **timeout** parameters directly affect a job's health. When a flink job restarts, the message backlog causes backpressure, and the backpressure in turn slows checkpoints down or even makes them time out, creating a vicious circle.

- **Static adjustment**: a flink job's checkpoint parameters must be set before the job starts; they cannot be changed at runtime
- **Delayed data**: restarting a job to adjust checkpointing inevitably delays message processing, which hurts latency-sensitive scenarios badly
- **Aggravated backpressure**: after a restart, consumption lags behind; if the job's checkpoints already take long, backpressure combined with concurrent checkpointing degrades performance even further

## 2. Dynamic tuning with Fire

  The Fire framework enhances Flink checkpointing so that checkpoint parameters can be adjusted at runtime, achieving dynamic tuning without restarting the job. Flink developers only need to integrate the [Fire framework](https://github.com/ZTO-Express/fire), and can then call the restful interface Fire exposes at runtime to adjust checkpoint parameters dynamically.

## 3. Typical scenarios

- **Jobs with large state**

  Suppose a production job processes a very large volume of messages per second, holds a large state, and each checkpoint takes more than 5 minutes. If this job stops for more than 10 minutes, a large message backlog builds up, and the resulting backpressure combined with checkpointing further hurts performance. In this case the checkpoint interval can be increased temporarily and restored once the backpressure clears, reducing the performance impact of long checkpoints.

- **Temporary adjustment**

  You do not want to stop the job and only need to adjust the checkpoint interval, timeout, and similar parameters temporarily.

## 4. Integration example

```scala
@Checkpoint(interval = 100, unaligned = true) // checkpoint every 100s, with unaligned checkpoints enabled
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
object Demo extends BaseFlinkStreaming {

  override def process: Unit = {
    val dstream = this.fire.createKafkaDirectStream() // consume kafka via the api
    this.fire.sql("""create table statement ...""")
    this.fire.sql("""insert into statement ...""")
    this.fire.start
  }
}
```

## 5. Adjusting checkpoint parameters at runtime

Once a flink job integrated with Fire is running, the restful endpoint address can be found in the flink web UI under Job Manager -> Configuration:

![fire-restful](../img/fire-restful.png)

Once you have the endpoint address, call it with curl to tune at runtime:

```shell
curl -H "Content-Type:application/json" -X POST --data '{"interval":60000,"minPauseBetween": 60000, "timeout": 60000}' http://ip:5753/system/checkpoint
```

The effect is shown below:

![checkpoint dynamic tuning](../img/checkpoint-duration.png)


================================================
FILE: docs/highlight/spark-duration.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Dynamic Spark Streaming Batch Interval Adjustment

  As a micro-batch streaming engine, Spark Streaming has the batch interval as one of its most frequently tuned parameters. A smaller interval gives better latency but lower throughput; a larger interval gives worse latency but considerably higher throughput.

## 1. Pain points of traditional batch-interval adjustment

  Traditionally, adjusting the Spark Streaming batch interval means modifying code and restarting the job. This is cumbersome and inflexible, and cannot adapt to different production scenarios. During a big e-commerce promotion, for example, message volume can be three times the usual or more, and compute resources or the batch interval often need to be increased temporarily to raise throughput. With a few jobs that may be tolerable; with many jobs it wastes a great deal of time.

## 2. Dynamic tuning with Fire

  The Fire framework enhances Spark Streaming so that the batch interval can be adjusted at runtime, achieving dynamic tuning without restarting the job. Spark developers only need to integrate the [Fire framework](https://github.com/ZTO-Express/fire), and can then call the restful interface Fire exposes at runtime to adjust the batch interval dynamically.

## 3. Typical scenarios

- **Higher throughput**

  Increase the batch interval on the fly to ride out data spikes and raise Spark Streaming throughput.

- **Temporary adjustment**

  You do not want to stop the job and only need to adjust the batch interval temporarily.

## 4. Integration example

```scala
@Streaming(interval = 100, maxRatePerPartition = 100) // one Streaming batch every 100s, with the per-partition consumption rate capped
@Kafka(brokers = "localhost:9092", topics = "fire", groupId = "fire")
object SparkDemo extends BaseSparkStreaming {

  override def process: Unit = {
    val dstream = this.fire.createKafkaDirectStream() // consume kafka via the api
    sql("""select * from xxx""").show()
    this.fire.start
  }
}
```

## 5. Adjusting the batch interval at runtime

Once a Spark Streaming job integrated with Fire is running, the restful endpoint address can be found in the Spark web UI under Environment:

![streaming-duration](../img/streaming-duration.png)

Once you have the endpoint address, call it with curl to tune at runtime:

```shell
curl -H "Content-Type:application/json" -X POST --data '{"batchDuration": "20", "restartSparkContext": "false", "stopGracefully": "false"}' http://ip:27466/system/streaming/hotRestart
```

Calling this endpoint restarts only the StreamingContext; the SparkContext is not restarted.

================================================
FILE: docs/index.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->



## 1. Developer Guide

### 1.1 Development and deployment

#### [1.1.1 Framework integration](dev/integration.md)

#### [1.1.2 Configuration](dev/config.md)

#### [1.1.3 Cluster environment](dev/engine-env.md)

#### [1.1.4 Job deployment](dev/deploy-script.md)

### 1.3 Data sources

#### [1.3.1 Kafka Connector](connector/kafka.md)

#### [1.3.2 RocketMQ Connector](connector/rocketmq.md)

#### [1.3.3 Hive Connector](connector/hive.md)

#### [1.3.4 HBase Connector](connector/hbase.md)

#### [1.3.5 JDBC Connector](connector/jdbc.md)

#### [1.3.6 Oracle Connector](connector/oracle.md)

#### [1.3.7 Clickhouse Connector](connector/clickhouse.md)

#### [1.3.8 ADB Connector](connector/adb.md)

#### [1.3.9 Kudu Connector](#)

### [1.4 Accumulators](accumulator.md)

### [1.5 Scheduled tasks](schedule.md)

### [1.6 Thread pools and concurrent computing](threadpool.md)

### [1.7 Spark DataSource enhancements](datasource.md)

## 2. Building a Real-time Platform

### [2.1 Integration approach](platform.md)

### [2.2 Built-in APIs](restful.md)

## 3. Configuration and Tuning

### [3.1 Fire configuration](properties.md)



================================================
FILE: docs/platform.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->



================================================
FILE: docs/pom/flink-pom.xml
================================================
<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements.  See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License.  You may obtain a copy of the License at
  ~
  ~    http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.zto.bigdata.flink</groupId>
  <artifactId>flink-demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>${project.artifactId}</name>

  <properties>
    <fire.version>2.3.2-SNAPSHOT</fire.version>
    <hudi.version>0.9.0</hudi.version>
    <maven.scope>compile</maven.scope>
    <scala.binary.version>2.12</scala.binary.version>
    <scala.minor.version>13</scala.minor.version>
    <kafka.version>0.11.0.2</kafka.version>
    <sparkjava.version>2.8.0</sparkjava.version>
    <hadoop.version>2.6.0</hadoop.version>
    <hive.apache.version>1.1.0</hive.apache.version>
    <hive.version>1.1.0</hive.version>
    <hive.group>org.apache.hive</hive.group>
    <hbase.version>1.2.0</hbase.version>
    <impala.jdbc.version>2.5.30</impala.jdbc.version>
    <jackson.version>2.10.5</jackson.version>
    <rocketmq.version>4.8.0</rocketmq.version>
    <mysql.version>5.1.30</mysql.version>
    <guava.version>18.0</guava.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <scala.version>${scala.binary.version}.${scala.minor.version}</scala.version>
    <flink.reference>${flink.version}_${scala.binary.version}</flink.reference>
  </properties>

  <profiles>
    <!-- flink profile -->
    <profile>
      <id>flink-1.12.2</id>
      <properties>
        <flink.version>1.12.2</flink.version>
        <flink.major.version>1.12</flink.major.version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>org.apache.flink</groupId>
          <artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
          <version>${flink.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.flink</groupId>
          <artifactId>flink-queryable-state-runtime_${scala.binary.version}</artifactId>
          <version>${flink.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
      </dependencies>
    </profile>

    <profile>
      <id>flink-1.13.0</id>
      <properties>
        <flink.version>1.13.0</flink.version>
        <flink.major.version>1.13</flink.major.version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>org.apache.flink</groupId>
          <artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
          <version>${flink.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.flink</groupId>
          <artifactId>flink-queryable-state-runtime_${scala.binary.version}</artifactId>
          <version>${flink.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
      </dependencies>
    </profile>

    <profile>
      <id>flink-1.14.3</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <flink.version>1.14.3</flink.version>
        <flink.major.version>1.14</flink.major.version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>org.apache.flink</groupId>
          <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
          <version>${flink.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.flink</groupId>
          <artifactId>flink-queryable-state-runtime</artifactId>
          <version>${flink.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>com.zto.fire</groupId>
          <artifactId>fire-connector-flink-clickhouse_${flink.reference}</artifactId>
          <version>${fire.version}</version>
        </dependency>
      </dependencies>
    </profile>
  </profiles>

  <repositories>
    <repository>
      <id>zto</id>
      <url>http://maven.dev.ztosys.com/nexus/content/groups/public/</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>aliyun</id>
      <url>https://maven.aliyun.com/repository/central</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>central</id>
      <url>https://mirrors.huaweicloud.com/repository/maven/</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
  </repositories>

  <dependencies>
    <!-- Fire framework dependencies -->
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-common_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-core_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-flink_${flink.reference}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-enhance-flink_${flink.reference}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-enhance-arthas_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-connector-hbase_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-connector-jdbc_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-connector-flink-rocketmq_${flink.reference}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-metrics_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>

    <dependency>
      <groupId>com.sparkjava</groupId>
      <artifactId>spark-core</artifactId>
      <version>${sparkjava.version}</version>
    </dependency>

    <!-- Flink engine dependencies -->
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-java</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-scala_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-clients_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-runtime-web_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-queryable-state-client-java</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-statebackend-rocksdb_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_${scala.binary.version}</artifactId>
      <version>${kafka.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-table-api-java</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-table-api-scala-bridge_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-table-common</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-connector-hive_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-connector-jdbc_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-json</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-connector-elasticsearch-base_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-hadoop-compatibility_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>

    <!-- RocketMQ dependencies -->
    <dependency>
      <groupId>org.apache.rocketmq</groupId>
      <artifactId>rocketmq-client</artifactId>
      <version>${rocketmq.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.rocketmq</groupId>
      <artifactId>rocketmq-acl</artifactId>
      <version>${rocketmq.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-orc-nohive_${scala.binary.version}</artifactId>
      <version>${flink.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-shaded-hadoop-2-uber</artifactId>
      <version>2.6.5-8.0</version>
      <scope>${maven.scope}</scope>
      <exclusions>
        <exclusion>
          <groupId>javax.servlet</groupId>
          <artifactId>servlet-api</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

    <!-- Hive dependencies -->
    <dependency>
      <groupId>org.apache.hive</groupId>
      <artifactId>hive-exec</artifactId>
      <version>${hive.apache.version}</version>
      <scope>${maven.scope}</scope>
      <exclusions>
        <exclusion>
          <artifactId>calcite-core</artifactId>
          <groupId>org.apache.calcite</groupId>
        </exclusion>
      </exclusions>
    </dependency>

    <!-- HBase dependencies -->
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-common</artifactId>
      <version>${hbase.version}</version>
      <scope>${maven.scope}</scope>
      <exclusions>
        <exclusion>
          <groupId>org.apache.hbase</groupId>
          <artifactId>hbase-client</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-server</artifactId>
      <version>${hbase.version}</version>
      <scope>${maven.scope}</scope>
      <exclusions>
        <exclusion>
          <groupId>org.apache.hbase</groupId>
          <artifactId>hbase-client</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-client</artifactId>
      <version>${hbase.version}</version>
      <exclusions>
        <exclusion>
          <artifactId>calcite-core</artifactId>
          <groupId>org.apache.calcite</groupId>
        </exclusion>
      </exclusions>
    </dependency>

    <!-- Hudi dependencies (currently disabled) -->
    <!--<dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-flink-bundle_${scala.binary.version}</artifactId>
        <version>${hudi.version}</version>
        <scope>${maven.scope}</scope>
    </dependency>-->

    <dependency>
      <groupId>com.oracle</groupId>
      <artifactId>ojdbc6</artifactId>
      <version>11.2.0.3</version>
      <scope>${maven.scope}</scope>
    </dependency>

    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>${guava.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- ensure that we compile for JDK 1.8 -->
      <plugin>
        <inherited>true</inherited>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>

      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <version>2.15.2</version>
        <executions>
          <!-- Run scala compiler in the process-resources phase, so that dependencies
              on scala classes can be resolved later in the (Java) compile phase -->
          <execution>
            <id>scala-compile-first</id>
            <phase>process-resources</phase>
            <goals>
              <goal>compile</goal>
            </goals>
          </execution>

          <!-- Run scala compiler in the process-test-resources phase, so that
              dependencies on scala classes can be resolved later in the (Java) test-compile
              phase -->
          <execution>
            <id>scala-test-compile</id>
            <phase>process-test-resources</phase>
            <goals>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <executions>
          <!-- Add src/main/scala to source path of Eclipse -->
          <execution>
            <id>add-source</id>
            <phase>generate-sources</phase>
            <goals>
              <goal>add-source</goal>
            </goals>
            <configuration>
              <sources>
                <source>src/main/java</source>
                <source>src/main/scala</source>
                <source>src/main/java-flink-${flink.version}</source>
                <source>src/main/scala-flink-${flink.version}</source>
              </sources>
            </configuration>
          </execution>

          <!-- Add src/test/scala to test source path of Eclipse -->
          <execution>
            <id>add-test-source</id>
            <phase>generate-test-sources</phase>
            <goals>
              <goal>add-test-source</goal>
            </goals>
            <configuration>
              <sources>
                <source>src/test/scala</source>
              </sources>
            </configuration>
          </execution>
        </executions>
      </plugin>

      <!-- to generate Eclipse artifacts for projects mixing Scala and Java -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <version>2.10</version>
        <configuration>
          <downloadSources>true</downloadSources>
          <downloadJavadocs>true</downloadJavadocs>
          <projectnatures>
            <projectnature>org.scala-ide.sdt.core.scalanature</projectnature>
            <projectnature>org.eclipse.jdt.core.javanature</projectnature>
          </projectnatures>
          <buildcommands>
            <buildcommand>org.scala-ide.sdt.core.scalabuilder</buildcommand>
          </buildcommands>
          <classpathContainers>
            <classpathContainer>org.scala-ide.sdt.launching.SCALA_CONTAINER</classpathContainer>
            <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
          </classpathContainers>
          <excludes>
            <!-- in Eclipse, use scala-library, scala-compiler from the SCALA_CONTAINER
                rather than POM <dependency> -->
            <exclude>org.scala-lang:scala-library</exclude>
            <exclude>org.scala-lang:scala-compiler</exclude>
          </excludes>
          <sourceIncludes>
            <sourceInclude>**/*.scala</sourceInclude>
            <sourceInclude>**/*.java</sourceInclude>
          </sourceIncludes>
        </configuration>
      </plugin>

      <!-- When run tests in the test phase, include .java and .scala source
          files -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.19.1</version>
        <configuration>
          <includes>
            <include>**/*.java</include>
            <include>**/*.scala</include>
          </includes>
        </configuration>
      </plugin>

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.4.2</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <filters>
            <filter>
              <artifact>*:*</artifact>
              <excludes>
                <exclude>META-INF/*.SF</exclude>
                <exclude>META-INF/*.DSA</exclude>
                <exclude>META-INF/*.RSA</exclude>
              </excludes>
            </filter>
          </filters>
          <finalName>zto-${project.artifactId}-${project.version}</finalName>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>


================================================
FILE: docs/pom/spark-pom.xml
================================================
<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements.  See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License.  You may obtain a copy of the License at
  ~
  ~    http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.zto.bigdata.spark</groupId>
  <artifactId>spark-demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <inceptionYear>2008</inceptionYear>

  <properties>
    <maven.scope>compile</maven.scope>
    <fire.version>2.3.2-SNAPSHOT</fire.version>
    <hadoop.version>2.6.0</hadoop.version>
    <hive.version>1.1.0</hive.version>
    <hbase.version>1.2.0</hbase.version>
    <kudu.version>1.4.0</kudu.version>
    <impala.jdbc.version>2.5.30</impala.jdbc.version>
    <jackson.version>2.8.10</jackson.version>
    <guava.version>18.0</guava.version>
    <hudi.version>0.9.0</hudi.version>
    <rocketmq.version>4.8.0</rocketmq.version>

    <scala.binary.version>2.12</scala.binary.version>
    <scala.minor.version>13</scala.minor.version>
    <kafka.version>0.11.0.2</kafka.version>
    <sparkjava.version>2.8.0</sparkjava.version>
    <hive.group>org.apache.hive</hive.group>
    <rocketmq.external.version>0.0.3</rocketmq.external.version>
    <mysql.version>5.1.49</mysql.version>
    <curator.version>2.6.0</curator.version>
    <arthas.version>3.5.4</arthas.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <scala.version>${scala.binary.version}.${scala.minor.version}</scala.version>
    <spark.reference>${spark.version}_${scala.binary.version}</spark.reference>
  </properties>

  <repositories>
    <repository>
      <id>zto</id>
      <url>http://maven.dev.ztosys.com/nexus/content/groups/public/</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>aliyun</id>
      <url>https://maven.aliyun.com/repository/central</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>huaweicloud</id>
      <url>https://mirrors.huaweicloud.com/repository/maven/</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
  </repositories>

  <profiles>
    <!-- spark profile -->
    <profile>
      <id>spark-3.0.2</id>
      <properties>
        <spark.version>3.0.2</spark.version>
        <spark.major.version>3.0</spark.major.version>
        <jackson.version>2.10.5</jackson.version>
        <scala.binary.version>2.12</scala.binary.version>
        <scala.minor.version>13</scala.minor.version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>org.apache.spark</groupId>
          <artifactId>spark-avro_${scala.binary.version}</artifactId>
          <version>${spark.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.hudi</groupId>
          <artifactId>hudi-spark3_${scala.binary.version}</artifactId>
          <version>${hudi.version}</version>
        </dependency>
        <dependency>
          <groupId>org.apache.spark</groupId>
          <artifactId>spark-hive_${scala.binary.version}</artifactId>
          <version>${spark.version}</version>
          <scope>${maven.scope}</scope>
          <exclusions>
            <exclusion>
              <groupId>org.apache.hive</groupId>
              <artifactId>hive-common</artifactId>
            </exclusion>
            <exclusion>
              <groupId>org.apache.hive</groupId>
              <artifactId>hive-exec</artifactId>
            </exclusion>
            <exclusion>
              <groupId>org.apache.hive</groupId>
              <artifactId>hive-metastore</artifactId>
            </exclusion>
            <exclusion>
              <groupId>org.apache.hive</groupId>
              <artifactId>hive-serde</artifactId>
            </exclusion>
            <exclusion>
              <groupId>org.apache.hive</groupId>
              <artifactId>hive-shims</artifactId>
            </exclusion>
          </exclusions>
        </dependency>
        <dependency>
          <groupId>org.apache.spark</groupId>
          <artifactId>spark-hive-thriftserver_${scala.binary.version}</artifactId>
          <version>${spark.version}</version>
          <scope>${maven.scope}</scope>
          <exclusions>
            <exclusion>
              <groupId>org.apache.hive</groupId>
              <artifactId>hive-cli</artifactId>
            </exclusion>
            <exclusion>
              <groupId>org.apache.hive</groupId>
              <artifactId>hive-jdbc</artifactId>
            </exclusion>
            <exclusion>
              <groupId>org.apache.hive</groupId>
              <artifactId>hive-beeline</artifactId>
            </exclusion>
          </exclusions>
        </dependency>
        <dependency>
          <groupId>${hive.group}</groupId>
          <artifactId>hive-cli</artifactId>
          <version>${hive.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>${hive.group}</groupId>
          <artifactId>hive-jdbc</artifactId>
          <version>${hive.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>${hive.group}</groupId>
          <artifactId>hive-beeline</artifactId>
          <version>${hive.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>

        <dependency>
          <groupId>${hive.group}</groupId>
          <artifactId>hive-common</artifactId>
          <version>${hive.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>${hive.group}</groupId>
          <artifactId>hive-metastore</artifactId>
          <version>${hive.version}</version>
          <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
          <groupId>${hive.group}</groupId>
          <artifactId>hive-exec</artifactId>
          <version>${hive.version}</version>
          <scope>${maven.scope}</scope>
          <exclusions>
            <exclusion>
              <groupId>org.apache.commons</groupId>
              <artifactId>commons-lang3</artifactId>
            </exclusion>
            <exclusion>
              <groupId>org.apache.spark</groupId>
              <artifactId>spark-core_2.10</artifactId>
            </exclusion>
          </exclusions>
        </dependency>
        <!-- Hudi dependencies -->
        <dependency>
          <groupId>org.apache.hudi</groupId>
          <artifactId>hudi-spark-bundle_${scala.binary.version}</artifactId>
          <version>${hudi.version}</version>
        </dependency>
        <dependency>
          <groupId>org.apache.hudi</groupId>
          <artifactId>hudi-spark-client</artifactId>
          <version>${hudi.version}</version>
        </dependency>

        <dependency>
          <groupId>org.apache.hudi</groupId>
          <artifactId>hudi-utilities-bundle_2.12</artifactId>
          <version>${hudi.version}</version>
        </dependency>
      </dependencies>
    </profile>

    <profile>
      <id>spark-2.3.2</id>
      <properties>
        <spark.version>2.3.2</spark.version>
        <spark.major.version>2.3</spark.major.version>
        <jackson.version>2.6.7</jackson.version>
        <scala.binary.version>2.11</scala.binary.version>
        <scala.minor.version>8</scala.minor.version>
      </properties>
    </profile>
  </profiles>

  <dependencies>
    <!-- Resolves the Netty conflict when running Spark 2.3 in local mode -->
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>${netty.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>

    <!-- Fire framework dependencies -->
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-common_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-core_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-spark_${spark.reference}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-enhance-spark_${spark.reference}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-connector-spark-rocketmq_${spark.reference}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-connector-spark-hbase_${spark.reference}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-connector-hbase_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>
    <dependency>
      <groupId>com.zto.fire</groupId>
      <artifactId>fire-connector-jdbc_${scala.binary.version}</artifactId>
      <version>${fire.version}</version>
    </dependency>

    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-compiler</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-reflect</artifactId>
      <version>${scala.version}</version>
    </dependency>

    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.10.0</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.10.0</version>
      <scope>${maven.scope}</scope>
    </dependency>

    <!-- Spark dependencies -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_${scala.binary.version}</artifactId>
      <exclusions>
        <exclusion>
          <groupId>com.esotericsoftware.kryo</groupId>
          <artifactId>kryo</artifactId>
        </exclusion>
      </exclusions>
      <version>${spark.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql-kafka-0-10_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka-0-10_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <!-- Hadoop dependencies -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
      <scope>${maven.scope}</scope>
    </dependency>

    <!-- HBase dependencies -->
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-common</artifactId>
      <version>${hbase.version}</version>
      <exclusions>
        <exclusion>
          <groupId>org.apache.hbase</groupId>
          <artifactId>hbase-client</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-server</artifactId>
      <version>${hbase.version}</version>
      <exclusions>
        <exclusion>
          <groupId>org.apache.hbase</groupId>
          <artifactId>hbase-client</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-client</artifactId>
      <version>${hbase.version}</version>
    </dependency>

    <!-- RocketMQ dependencies -->
    <dependency>
      <groupId>org.apache.rocketmq</groupId>
      <artifactId>rocketmq-client</artifactId>
      <version>${rocketmq.version}</version>
    </dependency>

    <!-- Hudi dependencies -->
    <dependency>
      <groupId>org.apache.hudi</groupId>
      <artifactId>hudi-spark-bundle_${scala.binary.version}</artifactId>
      <version>0.7.0</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>ru.yandex.clickhouse</groupId>
      <artifactId>clickhouse-jdbc</artifactId>
      <version>0.2.4</version>
      <scope>${maven.scope}</scope>
    </dependency>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>${guava.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- ensure that we compile with JDK 1.8 -->
      <plugin>
        <inherited>true</inherited>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>

      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <version>2.15.2</version>
        <executions>
          <!-- Run scala compiler in the process-resources phase, so that dependencies
              on scala classes can be resolved later in the (Java) compile phase -->
          <execution>
            <id>scala-compile-first</id>
            <phase>process-resources</phase>
            <goals>
              <goal>compile</goal>
            </goals>
          </execution>

          <!-- Run scala compiler in the process-test-resources phase, so that
              dependencies on scala classes can be resolved later in the (Java) test-compile
              phase -->
          <execution>
            <id>scala-test-compile</id>
            <phase>process-test-resources</phase>
            <goals>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <executions>
          <!-- Add src/main/scala to source path of Eclipse -->
          <execution>
            <id>add-source</id>
            <phase>generate-sources</phase>
            <goals>
              <goal>add-source</goal>
            </goals>
            <configuration>
              <sources>
                <source>src/main/java</source>
                <source>src/main/scala</source>
                <source>src/main/java-spark-${spark.version}</source>
                <source>src/main/scala-spark-${spark.version}</source>
              </sources>
            </configuration>
          </execution>

          <!-- Add src/test/scala to test source path of Eclipse -->
          <execution>
            <id>add-test-source</id>
            <phase>generate-test-sources</phase>
            <goals>
              <goal>add-test-source</goal>
            </goals>
            <configuration>
              <sources>
                <source>src/test/scala</source>
              </sources>
            </configuration>
          </execution>
        </executions>
      </plugin>

      <!-- to generate Eclipse artifacts for projects mixing Scala and Java -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <version>2.10</version>
        <configuration>
          <downloadSources>true</downloadSources>
          <downloadJavadocs>true</downloadJavadocs>
          <projectnatures>
            <projectnature>org.scala-ide.sdt.core.scalanature</projectnature>
            <projectnature>org.eclipse.jdt.core.javanature</projectnature>
          </projectnatures>
          <buildcommands>
            <buildcommand>org.scala-ide.sdt.core.scalabuilder</buildcommand>
          </buildcommands>
          <classpathContainers>
            <classpathContainer>org.scala-ide.sdt.launching.SCALA_CONTAINER</classpathContainer>
            <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
          </classpathContainers>
          <excludes>
            <!-- in Eclipse, use scala-library, scala-compiler from the SCALA_CONTAINER
                rather than POM <dependency> -->
            <exclude>org.scala-lang:scala-library</exclude>
            <exclude>org.scala-lang:scala-compiler</exclude>
          </excludes>
          <sourceIncludes>
            <sourceInclude>**/*.scala</sourceInclude>
            <sourceInclude>**/*.java</sourceInclude>
          </sourceIncludes>
        </configuration>
      </plugin>

      <!-- When run tests in the test phase, include .java and .scala source
          files -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.19.1</version>
        <configuration>
          <includes>
            <include>**/*.java</include>
            <include>**/*.scala</include>
          </includes>
        </configuration>
      </plugin>

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.4.2</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <filters>
            <filter>
              <artifact>*:*</artifact>
              <excludes>
                <exclude>META-INF/*.SF</exclude>
                <exclude>META-INF/*.DSA</exclude>
                <exclude>META-INF/*.RSA</exclude>
              </excludes>
            </filter>
          </filters>
          <finalName>zto-${project.artifactId}-${project.version}</finalName>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

================================================
FILE: docs/properties.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Fire Framework Parameters

  The Fire framework exposes many parameters that provide a great deal of flexibility for per-job tuning. They fall roughly into **Fire framework parameters** (*fire.properties*), **Spark engine parameters** (*spark.properties*), **Flink engine parameters** (*flink.properties*), **Kafka parameters**, **HBase parameters**, and so on. See the tables below:

# 1. Fire Framework Parameters

| Parameter | Default | Description | Since | Deprecated |
| --------- | ------- | ----------- | ----- | ---------- |
| fire.thread.pool.size | 5 | Size of the built-in Fire thread pool | 0.4.0 | No |
| fire.thread.pool.schedule.size | 5 | Size of the built-in Fire scheduled-task thread pool | 0.4.0 | No |
| fire.rest.enable | true | Whether to enable the RESTful service built into the Fire framework, which can be used for integration with platform systems. | 0.3.0 | No |
| fire.conf.show.enable | true | Whether to print non-sensitive configuration information | 0.1.0 | No |
| fire.rest.url.show.enable | false | Whether to print the Fire RESTful service address in the logs | 0.3.0 | No |
| fire.rest.url.hostname | false | Whether to use the hostname as the REST service address | 2.0.0 | No |
| fire.acc.enable | true | Whether to enable all accumulators built into the Fire framework | 0.4.0 | No |
| fire.acc.log.enable | true | Whether to enable the Fire log accumulator | 0.4.0 | No |
| fire.acc.multi.counter.enable | true | Whether to enable the multi-value accumulator | 0.4.0 | No |
| fire.acc.multi.timer.enable | true | Whether to enable the time-dimension accumulator | 0.4.0 | No |
| fire.log.enable | true | Switch for Fire instrumentation logs; once disabled, instrumentation logs are no longer printed | 0.4.0 | No |
| fire.log.sql.length | 100 | Limits the string length of SQL logs in the Fire framework | 0.4.1 | No |
| fire.jdbc.storage.level | memory_and_disk_ser | Caching strategy Fire applies to datasets produced by JDBC operations, avoiding repeated database queries | 0.4.0 | No |
| fire.jdbc.query.partitions | 10 | Number of partitions the dataset is placed into after querying through JdbcConnector; configure according to the actual result set | 0.3.0 | No |
| fire.task.schedule.enable | true | Whether to enable Fire scheduled tasks, implemented on top of Quartz | 0.4.0 | No |
| fire.dynamic.conf.enable | true | Whether to enable dynamic configuration; Fire allows user configuration to be updated at runtime, e.g. rdd.repartition(this.conf.getInt(count)), which makes it possible to change the partition count on the fly for dynamic tuning. | 0.4.0 | No |
| fire.restful.max.thread | 8 | Maximum number of threads for the Fire REST service; increase it if the platform calls Fire interfaces frequently. | 0.4.0 | No |
| fire.quartz.max.thread | 8 | Maximum Quartz thread-pool size; increase it if the job contains many scheduled tasks. | 0.4.0 | No |
| fire.acc.log.min.size | 500 | Minimum number of collected log records to retain. | 0.4.0 | No |
| fire.acc.log.max.size | 1000 | Maximum number of collected log records to retain. | 0.4.0 | No |
| fire.acc.timer.max.size | 1000 | Maximum number of records retained by the timer accumulator | 0.4.0 | No |
| fire.acc.timer.max.hour | 12 | The timer accumulator cleans up records older than this many hours | 0.4.0 | No |
| fire.acc.env.enable | true | Switch for the env accumulator | 0.4.0 | No |
| fire.acc.env.max.size | 500 | Maximum number of records retained by the env accumulator | 0.4.0 | No |
| fire.acc.env.min.size | 100 | Minimum number of records retained by the env accumulator | 0.4.0 | No |
| fire.scheduler.blacklist | | Blacklist for scheduled tasks; the value is a comma-separated list of scheduled-task method names. Blacklisted methods are not scheduled by Quartz. | 0.4.1 | No |
| fire.conf.print.blacklist | .map.,pass,secret | Blacklist for configuration printing; configuration keys containing any of the listed fragments are neither printed nor shown in the Spark/Flink web UI. | 0.4.2 | No |
| fire.restful.port.retry_num | 3 | Starting the Fire REST server may fail due to port conflicts; this parameter controls how many times Fire retries. | 1.0.0 | No |
| fire.restful.port.retry_duration | 1000 | Interval between port retries (ms) | 1.0.0 | No |
| fire.log.level.conf.org.apache.spark | info | Sets the log level for a given package; by default all classes under the spark package are set to info | 1.0.0 | No |
| fire.deploy_conf.enable | true | Whether to perform distributed initialization of accumulators | 0.4.0 | No |
| fire.exception_bus.size | 1000 | Limits the maximum number of exception objects held in the queue inside each JVM instance, avoiding out-of-memory errors caused by an oversized queue | 2.0.0 | No |
| fire.buried_point.datasource.enable | true | Whether to enable data-source instrumentation; when enabled, Fire automatically collects information about the data sources used by the job (Kafka, JDBC, HBase, Hive, etc.). | 2.0.0 | No |
| fire.buried_point.datasource.max.size | 200 | Maximum size of the queue holding instrumentation records; records beyond this size are discarded | 2.0.0 | No |
| fire.buried_point.datasource.initialDelay | 30 | Initial delay of the periodic instrumentation-SQL parsing task (s) | 2.0.0 | No |
| fire.buried_point.datasource.period | 60 | Execution interval of the periodic instrumentation-SQL parsing task (s) | 2.0.0 | No |
| fire.buried_point.datasource.map.tidb | 4000 | Used for JDBC URL recognition; when a data source cannot be identified by its driver class, it is distinguished by the port number in the URL. Different data sources share the unified prefix fire.buried_point.datasource.map. | 2.0.0 | No |
| fire.conf.adaptive.prefix | true | Whether to enable the adaptive configuration prefix, automatically prepending the engine prefix (spark.\|flink.) to configuration keys | 2.0.0 | No |
| fire.user.common.conf | common.properties | Shared user configuration file; users can keep common configuration here. Its priority is lower than the job's own configuration file (multiple files separated by commas) | 2.0.0 | No |
| fire.shutdown.auto.exit | true | Whether to actively exit the JVM process when the shutdown method is called; if true, once this.stop is reached, the context is closed, thread pools are reclaimed, and System.exit(0) is invoked to force the process to exit. | 2.0.0 | No |
| fire.kafka.cluster.map.test | ip1:9092,ip2:9092 | Mapping from a Kafka cluster name to its address, so that a job configuration can consume a given Kafka cluster via an alias. For example, kafka.brokers.name=test means consuming the cluster ip1:9092,ip2:9092. Configuring the URL directly is also supported: kafka.brokers.name=ip1:9092,ip2:9092. | 0.1.0 | No |
| fire.hive.default.database.name | tmp | Default Hive database | 0.1.0 | No |
| fire.hive.table.default.partition.name | ds | Default Hive partition field name | 0.1.0 | No |
| fire.hive.cluster.map.test | thrift://ip:9083 | Hive metastore address of the test cluster (alias: test); a job can then connect to the thrift server of test via fire.hive.cluster=test. | | |
| fire.hbase.batch.size | 10000 | Amount of data a single thread reads from / writes to HBase | 0.1.0 | No |
| fire.hbase.storage.level | memory_and_disk_ser | Caching strategy Fire applies to datasets produced by HBase operations, avoiding repeated HBase reads caused by lazy evaluation or other reasons and reducing HBase load. | 0.3.2 | No |
| fire.hbase.scan.partitions | -1 | Number of partitions after repartitioning an HBase scan; configure according to the scanned data volume. -1 disables repartitioning. | 0.3.2 | No |
| fire.hbase.table.exists.cache.enable | true | Whether to cache HBase table-existence checks; when enabled, existence checks avoid heavy connection consumption | 2.0.0 | No |
| fire.hbase.table.exists.cache.reload.enable | true | Whether to enable the periodic refresh task for the HBase table-existence cache, avoiding errors when an HBase table is dropped. | 2.0.0 | No |
| fire.hbase.table.exists.cache.initialDelay | 60 | Initial delay of the periodic HBase table cache refresh task (s) | 2.0.0 | No |
| fire.hbase.table.exists.cache.period | 600 | Execution interval of the periodic HBase table cache refresh task (s) | 2.0.0 | No |
| fire.hbase.cluster.map.test | zk1:2181,zk2:2181 | ZooKeeper address of the test HBase cluster (alias: test) | 2.0.0 | No |
| fire.hbase.conf.hbase.zookeeper.property.clientPort | 2181 | HBase connection configuration; by convention it starts with fire.hbase.conf., e.g. fire.hbase.conf.hbase.rpc.timeout corresponds to hbase.rpc.timeout in HBase | 2.0.0 | No |
| fire.config_center.enable | true | Whether to fetch configuration from the configuration center at job startup, so that configuration inside the jar can be overridden dynamically. | 1.0.0 | No |
| fire.config_center.local.enable | false | Whether to call the configuration-center interface when running locally (Windows, Mac). | 1.0.0 | No |
| fire.config_center.register.conf.secret | | Secret key for calling the configuration-center interface | 1.0.0 | No |
| fire.config_center.register.conf.prod.address | | Configuration-center interface address | 0.4.1 | No |
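
The alias-mapping and pass-through conventions above combine naturally in a single task configuration file. A minimal sketch follows; the alias `test` and every address are placeholders, not real endpoints:

```properties
# Map the alias "test" to a concrete Kafka cluster, then consume it by alias
fire.kafka.cluster.map.test=ip1:9092,ip2:9092
kafka.brokers.name=test

# Hive metastore alias, selected in the job via fire.hive.cluster=test
fire.hive.cluster.map.test=thrift://ip:9083

# Pass-through HBase client setting: everything after the fire.hbase.conf.
# prefix is handed to HBase as-is (here: hbase.rpc.timeout)
fire.hbase.conf.hbase.rpc.timeout=30000
```

Because the `fire.hbase.conf.` prefix is stripped and the remainder passed through unchanged, any HBase client setting can be tunneled this way without code changes.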

# 2. Spark Engine Parameters

| Parameter | Default | Description | Since | Deprecated |
| --------- | ------- | ----------- | ----- | ---------- |
| spark.appName | | Spark application name; if empty, defaults to the class name | 0.1.0 | No |
| spark.local.cores | * | Number of cores used in Spark local mode; defaults to local[*], set automatically from the number of CPU cores on the current machine | 0.4.1 | No |
| spark.chkpoint.dir | | Spark checkpoint directory | 0.1.0 | No |
| spark.log.level | WARN | Spark log level | 0.1.0 | No |
| spark.fire.scheduler.blacklist | jvmMonitor | Scheduled-task blacklist, naming methods annotated with @Scheduled (comma-separated). Blacklisted tasks are not invoked by the scheduler. | 0.4.0 | No |
| spark.kafka.group.id | | Kafka group id used when Spark consumes Kafka; if empty, defaults to the class name | 0.1.0 | No |
| spark.kafka.brokers.name | | Kafka broker address consumed by the job; if an alias was declared via fire.kafka.cluster.map.xxx, the alias can be used here as well. | 0.1.0 | No |
| spark.kafka.topics | | Topics to consume, comma-separated | 0.1.0 | No |
| spark.kafka.starting.offsets | latest | Consumer position at startup; defaults to the latest offset | 0.1.0 | No |
| spark.kafka.failOnDataLoss | true | Fail the job on data loss | 0.1.0 | No |
| spark.kafka.enable.auto.commit | false | Whether to auto-commit Kafka offsets | 0.4.0 | No |
| spark.kafka.conf.xxx | | A Kafka parameter prefixed with spark.kafka.conf sets the corresponding Kafka parameter, e.g. spark.kafka.conf.request.timeout.ms maps to Kafka's request.timeout.ms. | 0.4.0 | No |
| spark.hive.cluster | | Hive thriftserver address Spark connects to; both a URL and an alias are supported. An alias must first be declared via fire.hive.cluster.map.alias=thrift://ip:9083. | 0.1.0 | No |
| spark.rocket.cluster.map.alias | ip:9876 | RocketMQ alias list | 1.0.0 | No |
| spark.rocket.conf.xxx | | Configurations prefixed with spark.rocket.conf support all RocketMQ client settings | 1.0.0 | No |
| spark.hdfs.ha.enable | true | Whether to enable HDFS HA configuration, avoiding the inflexibility of placing hdfs-site.xml and core-site.xml in resources when multiple Hadoop clusters are involved, and avoiding Spark job failures caused by NameNode maintenance. | 1.0.0 | No |
| spark.hdfs.ha.conf.test.fs.defaultFS | hdfs://nameservice1 | Corresponds to fs.defaultFS; here test matches the alias declared in fire.hive.cluster.map.test. When fire.hive.cluster=test selects that Hive cluster, NameNode HA takes effect. | 1.0.0 | No |
| spark.hdfs.ha.conf.test.dfs.nameservices | nameservice1 | Corresponds to dfs.nameservices | 1.0.0 | No |
| spark.hdfs.ha.conf.test.dfs.ha.namenodes.nameservice1 | namenode5231,namenode5229 | Corresponds to dfs.ha.namenodes.nameservice1 | 1.0.0 | No |
| spark.hdfs.ha.conf.test.dfs.namenode.rpc-address.nameservice1.namenode5231 | ip:8020 | Corresponds to dfs.namenode.rpc-address.nameservice1.namenode5231 | 1.0.0 | No |
| spark.hdfs.ha.conf.test.dfs.namenode.rpc-address.nameservice1.namenode5229 | ip2:8020 | Corresponds to dfs.namenode.rpc-address.nameservice1.namenode5229 | 1.0.0 | No |
| spark.hdfs.ha.conf.test.dfs.client.failover.proxy.provider.nameservice1 | org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider | Corresponds to dfs.client.failover.proxy.provider.nameservice1 | 1.0.0 | No |
| spark.impala.connection.url | jdbc:hive2://ip:21050/;auth=noSasl | Impala JDBC URL | 0.1.0 | No |
| spark.impala.jdbc.driver.class.name | org.apache.hive.jdbc.HiveDriver | Impala JDBC driver | 0.1.0 | No |
| spark.datasource.options. | | Configurations with this prefix are loaded into the options of the DataSource API | 2.0.0 | No |
| spark.datasource.format | | Format for the DataSource API | 2.0.0 | No |
| spark.datasource.saveMode | Append | saveMode for the DataSource API | 2.0.0 | No |
| spark.datasource.saveParam | | Argument for dataFrame.write.format.save() | 2.0.0 | No |
| spark.datasource.isSaveTable | false | Decides whether save(path) or saveAsTable is called | 2.0.0 | No |
| spark.datasource.loadParam | | Argument for spark.read.format.load() | 2.0.0 | No |
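
The `spark.kafka.conf.` and `spark.datasource.options.` prefixes above follow the same pass-through pattern. A sketch of how they might appear in a task configuration; the timeout values, MySQL URL, and table name are illustrative placeholders:

```properties
# Keys after the spark.kafka.conf. prefix go straight to the Kafka client
spark.kafka.conf.request.timeout.ms=60000
spark.kafka.conf.session.timeout.ms=30000

# DataSource API: format/saveMode plus arbitrary options via the prefix
spark.datasource.format=jdbc
spark.datasource.saveMode=Append
spark.datasource.options.url=jdbc:mysql://ip:3306/db
spark.datasource.options.dbtable=t_user
```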

# 三、Flink引擎参数

| 参数                                             | 默认值                                                       | 含义                                                         | 生效版本 | 是否废弃 |
| ------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -------- | -------- |
| flink.appName                                    |                                                              | flink的应用名称,为空则取类名                                | 1.0.0    | 否       |
| flink.kafka.group.id                             |                                                              | kafka的groupid,为空则取类名                                 | 1.0.0    | 否       |
| flink.kafka.brokers.name                         |                                                              | 用于配置任务消费的kafka broker地址,如果通过fire.kafka.cluster.map.xxx指定了broker别名,则此处也可以填写别名。 | 1.0.0    | 否       |
| flink.kafka.topics                               |                                                              | 消费的kafka topic列表,多个以逗号分隔                        | 1.0.0    | 否       |
| flink.kafka.starting.offsets                     |                                                              | 用于配置启动时的消费位点,默认取最新                         | 1.0.0    | 否       |
| flink.kafka.failOnDataLoss                       | true                                                         | 数据丢失时执行失败                                           | 1.0.0    | 否       |
| flink.kafka.enable.auto.commit                   | false                                                        | 是否启用自动提交kafka offset                                 | 1.0.0    | 否       |
| flink.kafka.CommitOffsetsOnCheckpoints           | true                                                         | 是否在checkpoint时记录offset值                               | 1.0.0    | 否       |
| flink.kafka.StartFromTimestamp                   | 0                                                            | 设置从指定时间戳位置开始消费kafka                            | 1.0.0    | 否       |
| flink.kafka.StartFromGroupOffsets                | false                                                        | 从topic中指定的group上次消费的位置开始消费,必须配置group.id参数 | 1.0.0    | 否       |
| flink.log.level                                  | WARN                                                         | 默认的日志级别                                               | 1.0.0    | 否       |
| flink.hive.cluster                               |                                                              | 用于配置flink读写的hive集群别名                              | 1.0.0    | 否       |
| flink.hive.version                               |                                                              | 指定hive版本号                                               | 1.0.0    | 否       |
| flink.default.database.name                      | tmp                                                          | 默认的hive数据库                                             | 1.0.0    | 否       |
| flink.default.table.partition.name               | ds                                                           | 默认的hive分区字段名称                                       | 1.0.0    | 否       |
| flink.hive.catalog.name                          | hive                                                         | Name of the Hive catalog                                     | 1.0.0    | No       |
| flink.fire.hive.site.path.map.&lt;alias&gt;      | test                                                         | /path/to/hive-site-path/                                     | 1.0.0    | No       |
| flink.hbase.cluster                              | test                                                         | ZK address of the HBase cluster to read from and write to    | 1.0.0    | No       |
| flink.max.parallelism                            |                                                              | Configures Flink's max parallelism                           | 1.0.0    | No       |
| flink.default.parallelism                        |                                                              | Configures the job's default parallelism                     | 1.0.0    | No       |
| flink.stream.checkpoint.interval                 | -1                                                           | Checkpoint interval; -1 disables checkpointing               | 1.0.0    | No       |
| flink.stream.checkpoint.mode                     | EXACTLY_ONCE                                                 | Checkpoint mode: EXACTLY_ONCE/AT_LEAST_ONCE                  | 1.0.0    | No       |
| flink.stream.checkpoint.timeout                  | 600000                                                       | Checkpoint timeout, in milliseconds                          | 1.0.0    | No       |
| flink.stream.checkpoint.max.concurrent           | 1                                                            | Maximum number of concurrent checkpoint operations           | 1.0.0    | No       |
| flink.stream.checkpoint.min.pause.between        | 0                                                            | Minimum pause between two checkpoints                        | 1.0.0    | No       |
| flink.stream.checkpoint.prefer.recovery          | false                                                        | Whether to fall back to a more recent checkpoint for recovery when one exists | 1.0.0    | No       |
| flink.stream.checkpoint.tolerable.failure.number | 0                                                            | Number of tolerable checkpoint failures; by default none are allowed | 1.0.0    | No       |
| flink.stream.checkpoint.externalized             | RETAIN_ON_CANCELLATION                                       | Whether to retain the checkpoint when the job is cancelled   | 1.0.0    | No       |
| flink.sql.log.enable                             | false                                                        | Whether to print the Flink SQL after the with clause has been assembled; disabled by default because with expressions may contain sensitive information | 2.0.0    | No       |
| flink.sql.with.xxx                               | flink.sql.with.connector=jdbc flink.sql.with.url=jdbc:mysql://ip:3306/db | Configuration items prefixed with flink.sql.with. feed the SQL's with clause. Calling this.fire.sql(sql, keyNum) reads them automatically and maps them into the with clause, avoiding hard-coded with expressions in the code and improving flexibility. | 2.0.0    | No       |
| flink.sql_with.replaceMode.enable                | false                                                        | Whether the configured with items force-replace any with clause already present in the SQL; when enabled, the with list in the code's SQL is overwritten, for maximum flexibility. | 2.0.0    | No       |
| flink.sql.udf.fireUdf.enable                     | false                                                        | Whether fire registers classes from an external UDF jar as Flink SQL UDFs | 2.0.0    | No       |
| flink.sql.conf.pipeline.jars                     | /path/to/udf/jar/                                            | Specifies the path of the UDF jar                            | 2.0.0    | No       |
| flink.sql.udf.conf.xxx                           | package + class name                                         | Maps a UDF function name to its class; e.g. for a function named test implemented by com.udf.Udf, configure flink.sql.udf.conf.test=com.udf.Udf | 2.0.0    | No       |
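
As a concrete illustration of the with-clause mechanism described above, a job configuration file might carry the with options like this (connector type, URL and table name are illustrative values, not taken from the fire source):

```properties
# Items prefixed with flink.sql.with. are read by this.fire.sql(sql)
# and assembled into the SQL's with clause
flink.sql.with.connector=jdbc
flink.sql.with.url=jdbc:mysql://ip:3306/db
flink.sql.with.table-name=t_user

# Force the configured options to overwrite any with clause already in the SQL
flink.sql_with.replaceMode.enable=true
```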



================================================
FILE: docs/restful.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# fire's built-in RESTful endpoints

Besides the rich, easy-to-use APIs it offers developers, the fire framework also exposes a large number of RESTful endpoints to real-time big data computing platforms. Through these endpoints, each task can be deeply bound to the real-time platform, opening up more possibilities for platform building. They include: **hot restart, dynamic batch-duration adjustment, online SQL debugging**, **Arthas JVM diagnosis**, **real-time lineage analysis**, and more.

| Engine | Endpoint                     | Description                                                  |
| ------ | ---------------------------- | ------------------------------------------------------------ |
| Common | /system/kill                 | Kills the task itself.                                       |
| Common | /system/cancelJob            | In production the Spark Web UI's kill feature is usually disabled, yet job owners still occasionally need to kill jobs. To meet this need, fire exposes the kill capability through this endpoint, so the platform controls permissions and triggers the job kill. |
| Common | /system/cancelStage          | Same as the job kill, but kills the specified stage.         |
| Common | /system/sql                  | Lets users submit SQL to the Spark job for execution. Useful for dynamic SQL debugging; supports joining Spark temp tables with Hive tables during development, reducing SQL development effort. |
| Common | /system/sparkInfo            | Gets the configuration of the current Spark job.             |
| Common | /system/counter              | Gets the value of an accumulator.                            |
| Common | /system/multiCounter         | Gets the values of a multi-value accumulator.                |
| Common | /system/multiTimer           | Gets the values of a time-dimension multi-value accumulator. |
| Common | /system/log                  | Gets log messages; the platform can call this endpoint to fetch and display logs. |
| Common | /system/env                  | Gets runtime status, including GC, JVM, thread, memory, CPU, etc. |
| Common | /system/listDatabases        | Lists all databases in the current Spark job's catalog, including Hive databases. |
| Common | /system/listTables           | Lists all tables in a given database.                        |
| Common | /system/listColumns          | Lists all columns of a given table.                          |
| Spark  | /system/listFunctions        | Lists the functions supported by the current task.           |
| Common | /system/setConf              | Hot-overrides configuration, dynamically changing specified settings at runtime, e.g. changing the partition count of a Spark Streaming RDD for on-the-fly tuning. |
| Common | /system/datasource           | Gets the data sources and tables used by the current task. Supports JDBC, HBase, Kafka, Hive and many other components; can be integrated with the platform for real-time lineage. |
| Spark  | /system/streaming/hotRestart | Spark Streaming hot-restart endpoint; dynamically changes the batch duration of a running Spark Streaming job. |
| Flink  | /system/checkpoint           | Hot-modifies checkpoint settings at runtime.                 |
| Common | /system/arthas               | Dynamically starts or stops the Arthas service, for runtime JVM analysis and diagnosis. |
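
A platform-side call is a plain HTTP request against the job's REST port. A minimal sketch of building such a call, assuming a hypothetical host/port and query-parameter shape (the real parameter names of `/system/setConf` may differ):

```java
public class SetConfCall {
    // Builds the endpoint URL; host, port and the key/value parameter names
    // are illustrative assumptions, not the documented shape of the real endpoint
    static String buildUrl(String host, int port, String key, String value) {
        return String.format("http://%s:%d/system/setConf?key=%s&value=%s", host, port, key, value);
    }

    public static void main(String[] args) {
        String url = buildUrl("driver-host", 4040, "spark.streaming.batch.duration", "30");
        System.out.println(url);
        // An actual call would issue an HTTP request against this URL,
        // e.g. with java.net.http.HttpClient
    }
}
```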



================================================
FILE: docs/schedule.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Scheduled tasks

  The Fire framework wraps Quartz for declaring and scheduling timed tasks; usage is similar to Spring's @Scheduled annotation. See the [example program](../fire-examples/spark-examples/src/main/scala/com/zto/fire/examples/spark/schedule/ScheduleTest.scala). This makes it very easy to implement features such as periodically loading and refreshing dimension tables.

```scala
  /**
   * Methods annotated with @Scheduled are scheduled tasks and run periodically
   *
   * @cron the cron expression
   * @scope by default the task runs on both the driver and the executors; if set to driver, it runs only on the driver
   * @concurrent whether the next run may start while the previous run is still executing
   * @startAt the time of the first execution
   * @initialDelay how long to wait before the first execution
   */
  @Scheduled(cron = "0/5 * * * * ?", scope = "driver", concurrent = false, startAt = "2021-01-21 11:30:00", initialDelay = 60000)
  def loadTable: Unit = {
    this.logger.info("refreshing the dimension table")
  }

  /**
   * Runs on both the driver and the executors (scope = "all");
   * concurrent = false forbids overlapping executions of this method
   */
  @Scheduled(cron = "0/5 * * * * ?", scope = "all", concurrent = false)
  def test2: Unit = {
    this.logger.info("executorId=" + SparkUtils.getExecutorId + " method test2() runs every 5 seconds " + DateFormatUtils.formatCurrentDateTime())
  }

  // At 04:01 every day, reset the lock flag to false so the next batch refreshes the dimension table before running the SQL
  @Scheduled(cron = "0 1 4 * * ?")
  def updateTableJob: Unit = this.lock.compareAndSet(true, false)
```

**Note:** scheduled tasks do not currently support running on each Flink TaskManager.
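
The lock pattern in `updateTableJob` above (resetting a flag so the next batch refreshes the dimension table before running its SQL) can be sketched in plain Java. `DimTableGuard` and its method names are illustrative, not fire API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class DimTableGuard {
    // true = dimension table is up to date; false = needs a refresh before the next batch
    private final AtomicBoolean upToDate = new AtomicBoolean(false);

    // Called by the scheduled job (e.g. at 04:01 daily) to invalidate the cached table
    public void invalidate() {
        upToDate.compareAndSet(true, false);
    }

    // Called at the start of each batch: refreshes only when needed, exactly once
    public boolean refreshIfNeeded() {
        if (upToDate.compareAndSet(false, true)) {
            // load/update the dimension table here
            return true;
        }
        return false;
    }
}
```

`compareAndSet` makes both operations race-free, so a scheduled invalidation and a running batch can never double-refresh the table.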



================================================
FILE: docs/threadpool.md
================================================
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# Thread pools and concurrent computation

With Fire integrated, it is easy to submit multiple tasks from within a single program and make full use of the resources the job has requested.

```scala
/**
  * Examples of using the thread pool in the driver
  * 1. run a one-off task in a child thread
  * 2. run a periodic task in a child thread
  */
object ThreadTest extends BaseSparkStreaming {

  def main(args: Array[String]): Unit = {
    // The second parameter enables the checkpoint mechanism when true (false here, so it is disabled)
    this.init(10L, false)
  }

  /**
    * It is strongly recommended to keep the streaming logic in process, for a consistent style
    * Note: this method is called automatically; in the following two cases the logic MUST live in process:
    * 1. checkpointing is enabled
    * 2. streaming hot restart is used (the batch duration can be changed without stopping the streaming job)
    */
  override def process: Unit = {
    // Run showSchema after an initial delay of 1 minute, then once every minute
    this.runAsSchedule(this.showSchema, 1, 1)
    // Run the logic of the print method in a child thread
    this.runAsThread(this.print)

    val dstream = this.fire.createKafkaDirectStream()
    dstream.foreachRDD(rdd => {
      println("count--> " + rdd.count())
    })

    this.fire.start
  }

  /**
    * Executed once in a child thread
    */
  def print: Unit = {
    println("========== executed in a child thread ===========")
  }

  /**
    * Show table schema information
    */
  def showSchema: Unit = {
    println(s"${DateFormatUtils.formatCurrentDateTime()}--------> atFixRate <----------")
    this.fire.sql("use tmp")
    this.fire.sql("show tables").show(false)
  }
}
```
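
Under the assumption that `runAsThread`/`runAsSchedule` are thin wrappers over JDK executors (the wrapper below is an illustrative sketch, not fire's actual implementation), the same pattern in plain Java looks like this:

```java
import java.util.concurrent.*;

public class DriverTasks {
    // Daemon threads so the pool never blocks JVM exit
    private static final ScheduledExecutorService pool =
        Executors.newScheduledThreadPool(2, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });

    // Fire-and-forget in a child thread (analogous to runAsThread)
    static Future<?> runAsThread(Runnable task) {
        return pool.submit(task);
    }

    // Periodic execution with an initial delay, in minutes (analogous to runAsSchedule)
    static ScheduledFuture<?> runAsSchedule(Runnable task, long initialDelay, long period) {
        return pool.scheduleAtFixedRate(task, initialDelay, period, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws Exception {
        Future<?> f = runAsThread(() -> System.out.println("========== child thread ==========="));
        f.get(5, TimeUnit.SECONDS); // wait for the one-off task to finish
    }
}
```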


================================================
FILE: fire-common/pom.xml
================================================
<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements.  See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License.  You may obtain a copy of the License at
  ~
  ~    http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <artifactId>fire-common_${scala.binary.version}</artifactId>
    <packaging>jar</packaging>
    <name>Fire : Common </name>

    <parent>
        <groupId>com.zto.fire</groupId>
        <artifactId>fire-parent</artifactId>
        <version>2.3.2-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_${scala.binary.version}</artifactId>
            <version>${kafka.version}</version>
            <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.rocketmq</groupId>
            <artifactId>rocketmq-client</artifactId>
            <version>${rocketmq.version}</version>
            <scope>${maven.scope}</scope>
        </dependency>
        <dependency>
            <groupId>commons-httpclient</groupId>
            <artifactId>commons-httpclient</artifactId>
            <version>3.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.3.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpcore</artifactId>
            <version>4.4.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.htrace</groupId>
            <artifactId>htrace-core</artifactId>
            <version>3.2.0-incubating</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
        </plugins>
        <resources>
            <resource>
                <directory>src/main/resources</directory>
                <filtering>true</filtering>
            </resource>
        </resources>
    </build>
</project>


================================================
FILE: fire-common/src/main/java/com/zto/fire/common/anno/Config.java
================================================
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.zto.fire.common.anno;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Annotation-based task configuration: task parameters can be configured purely
 * through annotations, and multiple configuration files can be specified
 *
 * @author ChengLong 2021-8-3 10:49:30
 * @since 2.1.1
 */
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface Config {
    /**
     * List of configuration file names
     */
    String[] files() default "";

    /**
     * List of configuration items, as key=value strings
     */
    String[] props() default "";

    /**
     * Configuration content as a plain string
     */
    String value() default "";
}
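
Because the annotation has `RUNTIME` retention, the framework can read the declared files and properties reflectively at startup. A self-contained sketch (the annotation is re-declared locally to mirror `com.zto.fire.common.anno.Config`; the property keys are illustrative):

```java
import java.lang.annotation.*;

public class ConfigDemo {
    // Local re-declaration mirroring com.zto.fire.common.anno.Config, for a self-contained sketch
    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Config {
        String[] files() default "";
        String[] props() default "";
        String value() default "";
    }

    // A task class configured purely via the annotation (keys are illustrative)
    @Config(props = {"spark.streaming.batch.duration=30", "fire.rest.enable=true"})
    static class MyTask {}

    // What a framework can do at startup: read the key=value pairs reflectively
    static String[] readProps(Class<?> clazz) {
        Config conf = clazz.getAnnotation(Config.class);
        return conf == null ? new String[0] : conf.props();
    }

    public static void main(String[] args) {
        for (String kv : readProps(MyTask.class)) {
            System.out.println(kv);
        }
    }
}
```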

================================================
FILE: fire-common/src/main/java/com/zto/fire/common/anno/FieldName.java
================================================
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.zto.fire.common.anno;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Marks the database field name that this field maps to
 * Created by ChengLong on 2017-03-15.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.FIELD})
public @interface FieldName {
    /**
     * Field name; mapped to HBase as the qualifier name
     */
    String value() default "";

    /**
     * Column family name
     */
    String family() default "info";

    /**
     * Whether to exclude this field; included by default
     */
    boolean disuse() default false;

    /**
     * Whether the field is nullable
     */
    boolean nullable() default true;

    /**
     * Whether this is the primary-key field
     */
    boolean id() default false;

    /**
     * Namespace of the HBase table
     */
    String namespace() default "default";

    /**
     * Field comment
     */
    String comment() default "";
}
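
A sketch of how such a mapping can be resolved into an HBase `family:qualifier` pair. The annotation is re-declared locally (with only the members the sketch needs) so the example is self-contained; the bean, field names, and resolution helper are illustrative, not fire's actual connector code:

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;

public class FieldNameDemo {
    // Local re-declaration mirroring com.zto.fire.common.anno.FieldName, for a self-contained sketch
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.FIELD})
    @interface FieldName {
        String value() default "";
        String family() default "info";
        boolean disuse() default false;
    }

    // A bean whose fields map to HBase qualifiers
    static class User {
        @FieldName(value = "user_name", family = "cf1")
        String name;

        @FieldName(disuse = true)
        String internalFlag;
    }

    // Resolve "family:qualifier" for a field, falling back to the field's own name
    static String resolve(Field f) {
        FieldName fn = f.getAnnotation(FieldName.class);
        if (fn == null || fn.disuse()) return null; // excluded fields are skipped
        String qualifier = fn.value().isEmpty() ? f.getName() : fn.value();
        return fn.family() + ":" + qualifier;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve(User.class.getDeclaredField("name")));         // cf1:user_name
        System.out.println(resolve(User.class.getDeclaredField("internalFlag"))); // null
    }
}
```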


================================================
FILE: fire-common/src/main/java/com/zto/fire/common/anno/FireConf.java
================================================
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
│   │   │           │   └── META-INF/
│   │   │           │       └── services/
│   │   │           │           └── org.apache.flink.table.factories.Factory
│   │   │           └── scala/
│   │   │               └── com/
│   │   │                   └── zto/
│   │   │                       └── fire/
│   │   │                           └── flink/
│   │   │                               └── sql/
│   │   │                                   └── connector/
│   │   │                                       └── rocketmq/
│   │   │                                           ├── RocketMQDynamicTableFactory.scala
│   │   │                                           ├── RocketMQDynamicTableSink.scala
│   │   │                                           ├── RocketMQDynamicTableSource.scala
│   │   │                                           └── RocketMQOptions.scala
│   │   └── pom.xml
│   ├── pom.xml
│   └── spark-connectors/
│       ├── pom.xml
│       ├── spark-hbase/
│       │   ├── pom.xml
│       │   └── src/
│       │       └── main/
│       │           ├── java/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           └── hadoop/
│       │           │               └── hbase/
│       │           │                   ├── client/
│       │           │                   │   ├── ConnFactoryExtend.java
│       │           │                   │   └── ConnectionFactory.java
│       │           │                   └── spark/
│       │           │                       ├── SparkSQLPushDownFilter.java
│       │           │                       ├── example/
│       │           │                       │   └── hbasecontext/
│       │           │                       │       ├── JavaHBaseBulkDeleteExample.java
│       │           │                       │       ├── JavaHBaseBulkGetExample.java
│       │           │                       │       ├── JavaHBaseBulkPutExample.java
│       │           │                       │       ├── JavaHBaseDistributedScan.java
│       │           │                       │       ├── JavaHBaseMapGetPutExample.java
│       │           │                       │       └── JavaHBaseStreamingBulkPutExample.java
│       │           │                       └── protobuf/
│       │           │                           └── generated/
│       │           │                               └── FilterProtos.java
│       │           ├── protobuf/
│       │           │   └── Filter.proto
│       │           ├── scala/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           └── hadoop/
│       │           │               └── hbase/
│       │           │                   └── spark/
│       │           │                       ├── BulkLoadPartitioner.scala
│       │           │                       ├── ByteArrayComparable.scala
│       │           │                       ├── ByteArrayWrapper.scala
│       │           │                       ├── ColumnFamilyQualifierMapKeyWrapper.scala
│       │           │                       ├── DefaultSource.scala
│       │           │                       ├── DynamicLogicExpression.scala
│       │           │                       ├── FamiliesQualifiersValues.scala
│       │           │                       ├── FamilyHFileWriteOptions.scala
│       │           │                       ├── HBaseContext.scala
│       │           │                       ├── HBaseDStreamFunctions.scala
│       │           │                       ├── HBaseRDDFunctions.scala
│       │           │                       ├── JavaHBaseContext.scala
│       │           │                       ├── KeyFamilyQualifier.scala
│       │           │                       ├── NewHBaseRDD.scala
│       │           │                       ├── datasources/
│       │           │                       │   ├── Bound.scala
│       │           │                       │   ├── HBaseResources.scala
│       │           │                       │   ├── HBaseSparkConf.scala
│       │           │                       │   ├── SerializableConfiguration.scala
│       │           │                       │   └── package.scala
│       │           │                       └── example/
│       │           │                           ├── hbasecontext/
│       │           │                           │   ├── HBaseBulkDeleteExample.scala
│       │           │                           │   ├── HBaseBulkGetExample.scala
│       │           │                           │   ├── HBaseBulkPutExample.scala
│       │           │                           │   ├── HBaseBulkPutExampleFromFile.scala
│       │           │                           │   ├── HBaseBulkPutTimestampExample.scala
│       │           │                           │   ├── HBaseDistributedScanExample.scala
│       │           │                           │   └── HBaseStreamingBulkPutExample.scala
│       │           │                           └── rdd/
│       │           │                               ├── HBaseBulkDeleteExample.scala
│       │           │                               ├── HBaseBulkGetExample.scala
│       │           │                               ├── HBaseBulkPutExample.scala
│       │           │                               ├── HBaseForeachPartitionExample.scala
│       │           │                               └── HBaseMapPartitionExample.scala
│       │           ├── scala-spark-2.3/
│       │           │   └── apache/
│       │           │       └── hadoop/
│       │           │           └── hbase/
│       │           │               └── spark/
│       │           │                   └── datasources/
│       │           │                       └── HBaseTableScanRDD.scala
│       │           ├── scala-spark-2.4/
│       │           │   └── apache/
│       │           │       └── hadoop/
│       │           │           └── hbase/
│       │           │               └── spark/
│       │           │                   └── datasources/
│       │           │                       └── HBaseTableScanRDD.scala
│       │           ├── scala-spark-3.0/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           ├── hadoop/
│       │           │           │   └── hbase/
│       │           │           │       └── spark/
│       │           │           │           └── datasources/
│       │           │           │               └── HBaseTableScanRDD.scala
│       │           │           └── spark/
│       │           │               └── deploy/
│       │           │                   └── SparkHadoopUtil.scala
│       │           ├── scala-spark-3.1/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           ├── hadoop/
│       │           │           │   └── hbase/
│       │           │           │       └── spark/
│       │           │           │           └── datasources/
│       │           │           │               └── HBaseTableScanRDD.scala
│       │           │           └── spark/
│       │           │               └── deploy/
│       │           │                   └── SparkHadoopUtil.scala
│       │           ├── scala-spark-3.2/
│       │           │   └── org/
│       │           │       └── apache/
│       │           │           ├── hadoop/
│       │           │           │   └── hbase/
│       │           │           │       └── spark/
│       │           │           │           └── datasources/
│       │           │           │               └── HBaseTableScanRDD.scala
│       │           │           └── spark/
│       │           │               └── deploy/
│       │           │                   └── SparkHadoopUtil.scala
│       │           └── scala-spark-3.3/
│       │               └── org/
│       │                   └── apache/
│       │                       ├── hadoop/
│       │                       │   └── hbase/
│       │                       │       └── spark/
│       │                       │           └── datasources/
│       │                       │               └── HBaseTableScanRDD.scala
│       │                       └── spark/
│       │                           └── deploy/
│       │                               └── SparkHadoopUtil.scala
│       └── spark-rocketmq/
│           ├── pom.xml
│           └── src/
│               └── main/
│                   ├── java/
│                   │   └── org/
│                   │       └── apache/
│                   │           └── rocketmq/
│                   │               └── spark/
│                   │                   ├── OffsetCommitCallback.java
│                   │                   ├── RocketMQConfig.java
│                   │                   ├── TopicQueueId.java
│                   │                   └── streaming/
│                   │                       ├── DefaultMessageRetryManager.java
│                   │                       ├── MessageRetryManager.java
│                   │                       ├── MessageSet.java
│                   │                       ├── ReliableRocketMQReceiver.java
│                   │                       └── RocketMQReceiver.java
│                   ├── scala/
│                   │   └── org/
│                   │       └── apache/
│                   │           ├── rocketmq/
│                   │           │   └── spark/
│                   │           │       ├── CachedMQConsumer.scala
│                   │           │       ├── ConsumerStrategy.scala
│                   │           │       ├── LocationStrategy.scala
│                   │           │       ├── Logging.scala
│                   │           │       ├── OffsetRange.scala
│                   │           │       ├── RocketMqRDDPartition.scala
│                   │           │       └── RocketMqUtils.scala
│                   │           └── spark/
│                   │               ├── sql/
│                   │               │   └── rocketmq/
│                   │               │       ├── CachedRocketMQConsumer.scala
│                   │               │       ├── CachedRocketMQProducer.scala
│                   │               │       ├── JsonUtils.scala
│                   │               │       ├── RocketMQConf.scala
│                   │               │       ├── RocketMQOffsetRangeLimit.scala
│                   │               │       ├── RocketMQOffsetReader.scala
│                   │               │       ├── RocketMQRelation.scala
│                   │               │       ├── RocketMQSink.scala
│                   │               │       ├── RocketMQSourceProvider.scala
│                   │               │       ├── RocketMQUtils.scala
│                   │               │       ├── RocketMQWriteTask.scala
│                   │               │       └── RocketMQWriter.scala
│                   │               └── streaming/
│                   │                   └── MQPullInputDStream.scala
│                   ├── scala-spark-2.3/
│                   │   ├── org/
│                   │   │   └── apache/
│                   │   │       └── spark/
│                   │   │           └── sql/
│                   │   │               └── rocketmq/
│                   │   │                   ├── RocketMQSource.scala
│                   │   │                   ├── RocketMQSourceOffset.scala
│                   │   │                   └── RocketMQSourceRDDOffsetRange.scala
│                   │   └── org.apache.spark.streaming/
│                   │       └── RocketMqRDD.scala
│                   ├── scala-spark-2.4/
│                   │   ├── org/
│                   │   │   └── apache/
│                   │   │       └── spark/
│                   │   │           └── sql/
│                   │   │               └── rocketmq/
│                   │   │                   ├── RocketMQSource.scala
│                   │   │                   ├── RocketMQSourceOffset.scala
│                   │   │                   └── RocketMQSourceRDDOffsetRange.scala
│                   │   └── org.apache.spark.streaming/
│                   │       └── RocketMqRDD.scala
│                   ├── scala-spark-3.0/
│                   │   └── org/
│                   │       └── apache/
│                   │           └── spark/
│                   │               ├── sql/
│                   │               │   └── rocketmq/
│                   │               │       ├── RocketMQSource.scala
│                   │               │       ├── RocketMQSourceOffset.scala
│                   │               │       └── RocketMQSourceRDD.scala
│                   │               └── streaming/
│                   │                   └── RocketMqRDD.scala
│                   ├── scala-spark-3.1/
│                   │   └── org/
│                   │       └── apache/
│                   │           └── spark/
│                   │               ├── sql/
│                   │               │   └── rocketmq/
│                   │               │       ├── RocketMQSource.scala
│                   │               │       ├── RocketMQSourceOffset.scala
│                   │               │       └── RocketMQSourceRDD.scala
│                   │               └── streaming/
│                   │                   └── RocketMqRDD.scala
│                   ├── scala-spark-3.2/
│                   │   └── org/
│                   │       └── apache/
│                   │           └── spark/
│                   │               ├── sql/
│                   │               │   └── rocketmq/
│                   │               │       ├── RocketMQSource.scala
│                   │               │       ├── RocketMQSourceOffset.scala
│                   │               │       └── RocketMQSourceRDD.scala
│                   │               └── streaming/
│                   │                   └── RocketMqRDD.scala
│                   └── scala-spark-3.3/
│                       └── org/
│                           └── apache/
│                               └── spark/
│                                   ├── sql/
│                                   │   └── rocketmq/
│                                   │       ├── RocketMQSource.scala
│                                   │       ├── RocketMQSourceOffset.scala
│                                   │       └── RocketMQSourceRDD.scala
│                                   └── streaming/
│                                       └── RocketMqRDD.scala
├── fire-core/
│   ├── pom.xml
│   └── src/
│       └── main/
│           ├── java/
│           │   └── com/
│           │       └── zto/
│           │           └── fire/
│           │               └── core/
│           │                   ├── TimeCost.java
│           │                   ├── anno/
│           │                   │   ├── connector/
│           │                   │   │   ├── HBase.java
│           │                   │   │   ├── HBase2.java
│           │                   │   │   ├── HBase3.java
│           │                   │   │   ├── HBase4.java
│           │                   │   │   ├── HBase5.java
│           │                   │   │   ├── Hive.java
│           │                   │   │   ├── Jdbc.java
│           │                   │   │   ├── Jdbc2.java
│           │                   │   │   ├── Jdbc3.java
│           │                   │   │   ├── Jdbc4.java
│           │                   │   │   ├── Jdbc5.java
│           │                   │   │   ├── Kafka.java
│           │                   │   │   ├── Kafka2.java
│           │                   │   │   ├── Kafka3.java
│           │                   │   │   ├── Kafka4.java
│           │                   │   │   ├── Kafka5.java
│           │                   │   │   ├── RocketMQ.java
│           │                   │   │   ├── RocketMQ2.java
│           │                   │   │   ├── RocketMQ3.java
│           │                   │   │   ├── RocketMQ4.java
│           │                   │   │   └── RocketMQ5.java
│           │                   │   └── lifecycle/
│           │                   │       ├── After.java
│           │                   │       ├── Before.java
│           │                   │       ├── Handle.java
│           │                   │       ├── Process.java
│           │                   │       ├── Step1.java
│           │                   │       ├── Step10.java
│           │                   │       ├── Step11.java
│           │                   │       ├── Step12.java
│           │                   │       ├── Step13.java
│           │                   │       ├── Step14.java
│           │                   │       ├── Step15.java
│           │                   │       ├── Step16.java
│           │                   │       ├── Step17.java
│           │                   │       ├── Step18.java
│           │                   │       ├── Step19.java
│           │                   │       ├── Step2.java
│           │                   │       ├── Step3.java
│           │                   │       ├── Step4.java
│           │                   │       ├── Step5.java
│           │                   │       ├── Step6.java
│           │                   │       ├── Step7.java
│           │                   │       ├── Step8.java
│           │                   │       └── Step9.java
│           │                   ├── bean/
│           │                   │   └── ArthasParam.java
│           │                   └── task/
│           │                       ├── SchedulerManager.java
│           │                       ├── TaskRunner.java
│           │                       └── TaskRunnerQueue.java
│           ├── resources/
│           │   ├── cluster.properties
│           │   └── fire.properties
│           └── scala/
│               └── com/
│                   └── zto/
│                       └── fire/
│                           └── core/
│                               ├── Api.scala
│                               ├── BaseFire.scala
│                               ├── conf/
│                               │   └── AnnoManager.scala
│                               ├── connector/
│                               │   └── Connector.scala
│                               ├── ext/
│                               │   ├── BaseFireExt.scala
│                               │   └── Provider.scala
│                               ├── plugin/
│                               │   ├── ArthasDynamicLauncher.scala
│                               │   ├── ArthasLauncher.scala
│                               │   └── ArthasManager.scala
│                               ├── rest/
│                               │   ├── RestCase.scala
│                               │   ├── RestServerManager.scala
│                               │   └── SystemRestful.scala
│                               ├── sql/
│                               │   ├── SqlExtensionsParser.scala
│                               │   └── SqlParser.scala
│                               ├── sync/
│                               │   ├── LineageAccumulatorManager.scala
│                               │   ├── SyncEngineConf.scala
│                               │   └── SyncManager.scala
│                               ├── task/
│                               │   └── FireInternalTask.scala
│                               └── util/
│                                   └── SingletonFactory.scala
├── fire-engines/
│   ├── .gitignore
│   ├── fire-flink/
│   │   ├── .gitignore
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           ├── java/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── flink/
│   │           │                   ├── anno/
│   │           │                   │   ├── Checkpoint.java
│   │           │                   │   ├── FlinkConf.java
│   │           │                   │   └── Streaming.java
│   │           │                   ├── bean/
│   │           │                   │   ├── CheckpointParams.java
│   │           │                   │   ├── DistributeBean.java
│   │           │                   │   └── FlinkTableSchema.java
│   │           │                   ├── enu/
│   │           │                   │   └── DistributeModule.java
│   │           │                   ├── ext/
│   │           │                   │   └── watermark/
│   │           │                   │       └── FirePeriodicWatermarks.java
│   │           │                   ├── sink/
│   │           │                   │   ├── BaseSink.scala
│   │           │                   │   ├── HBaseSink.scala
│   │           │                   │   └── JdbcSink.scala
│   │           │                   └── task/
│   │           │                       └── FlinkSchedulerManager.java
│   │           ├── resources/
│   │           │   ├── META-INF/
│   │           │   │   └── services/
│   │           │   │       └── org.apache.flink.table.factories.Factory
│   │           │   ├── flink-batch.properties
│   │           │   ├── flink-streaming.properties
│   │           │   └── flink.properties
│   │           ├── scala/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           ├── fire/
│   │           │           │   └── flink/
│   │           │           │       ├── BaseFlink.scala
│   │           │           │       ├── BaseFlinkBatch.scala
│   │           │           │       ├── BaseFlinkCore.scala
│   │           │           │       ├── BaseFlinkStreaming.scala
│   │           │           │       ├── FlinkBatch.scala
│   │           │           │       ├── FlinkCore.scala
│   │           │           │       ├── FlinkStreaming.scala
│   │           │           │       ├── acc/
│   │           │           │       │   └── MultiCounterAccumulator.scala
│   │           │           │       ├── conf/
│   │           │           │       │   ├── FireFlinkConf.scala
│   │           │           │       │   └── FlinkAnnoManager.scala
│   │           │           │       ├── ext/
│   │           │           │       │   ├── batch/
│   │           │           │       │   │   ├── BatchExecutionEnvExt.scala
│   │           │           │       │   │   ├── BatchTableEnvExt.scala
│   │           │           │       │   │   └── DataSetExt.scala
│   │           │           │       │   ├── function/
│   │           │           │       │   │   ├── RichFunctionExt.scala
│   │           │           │       │   │   └── RuntimeContextExt.scala
│   │           │           │       │   ├── provider/
│   │           │           │       │   │   ├── HBaseConnectorProvider.scala
│   │           │           │       │   │   └── JdbcFlinkProvider.scala
│   │           │           │       │   └── stream/
│   │           │           │       │       ├── DataStreamExt.scala
│   │           │           │       │       ├── KeyedStreamExt.scala
│   │           │           │       │       ├── RowExt.scala
│   │           │           │       │       ├── SQLExt.scala
│   │           │           │       │       ├── StreamExecutionEnvExt.scala
│   │           │           │       │       ├── TableEnvExt.scala
│   │           │           │       │       ├── TableExt.scala
│   │           │           │       │       └── TableResultImplExt.scala
│   │           │           │       ├── plugin/
│   │           │           │       │   └── FlinkArthasLauncher.scala
│   │           │           │       ├── rest/
│   │           │           │       │   └── FlinkSystemRestful.scala
│   │           │           │       ├── sql/
│   │           │           │       │   ├── FlinkSqlExtensionsParser.scala
│   │           │           │       │   └── FlinkSqlParserBase.scala
│   │           │           │       ├── sync/
│   │           │           │       │   ├── DistributeSyncManager.scala
│   │           │           │       │   ├── FlinkLineageAccumulatorManager.scala
│   │           │           │       │   └── SyncFlinkEngine.scala
│   │           │           │       ├── task/
│   │           │           │       │   └── FlinkInternalTask.scala
│   │           │           │       └── util/
│   │           │           │           ├── FlinkSingletonFactory.scala
│   │           │           │           ├── FlinkUtils.scala
│   │           │           │           ├── HivePartitionTimeExtractor.scala
│   │           │           │           ├── RocketMQUtils.scala
│   │           │           │           └── StateCleanerUtils.scala
│   │           │           └── fire.scala
│   │           ├── scala-flink-1.12/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── flink/
│   │           │                   └── sql/
│   │           │                       └── FlinkSqlParser.scala
│   │           ├── scala-flink-1.13/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── flink/
│   │           │                   └── sql/
│   │           │                       └── FlinkSqlParser.scala
│   │           └── scala-flink-1.14/
│   │               └── com/
│   │                   └── zto/
│   │                       └── fire/
│   │                           └── flink/
│   │                               └── sql/
│   │                                   └── FlinkSqlParser.scala
│   ├── fire-spark/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           ├── java/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── spark/
│   │           │                   ├── anno/
│   │           │                   │   ├── SparkConf.java
│   │           │                   │   ├── Streaming.java
│   │           │                   │   └── StreamingDuration.java
│   │           │                   ├── bean/
│   │           │                   │   ├── ColumnMeta.java
│   │           │                   │   ├── FunctionMeta.java
│   │           │                   │   ├── GenerateBean.java
│   │           │                   │   ├── RestartParams.java
│   │           │                   │   ├── SparkInfo.java
│   │           │                   │   └── TableMeta.java
│   │           │                   └── task/
│   │           │                       └── SparkSchedulerManager.java
│   │           ├── resources/
│   │           │   ├── spark-core.properties
│   │           │   ├── spark-streaming.properties
│   │           │   ├── spark.properties
│   │           │   └── structured-streaming.properties
│   │           ├── scala/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           ├── fire/
│   │           │           │   └── spark/
│   │           │           │       ├── BaseSpark.scala
│   │           │           │       ├── BaseSparkBatch.scala
│   │           │           │       ├── BaseSparkCore.scala
│   │           │           │       ├── BaseSparkStreaming.scala
│   │           │           │       ├── BaseStructuredStreaming.scala
│   │           │           │       ├── SparkBatch.scala
│   │           │           │       ├── SparkCore.scala
│   │           │           │       ├── SparkStreaming.scala
│   │           │           │       ├── StructuredStreaming.scala
│   │           │           │       ├── acc/
│   │           │           │       │   ├── AccumulatorManager.scala
│   │           │           │       │   ├── EnvironmentAccumulator.scala
│   │           │           │       │   ├── LineageAccumulator.scala
│   │           │           │       │   ├── LogAccumulator.scala
│   │           │           │       │   ├── MultiCounterAccumulator.scala
│   │           │           │       │   ├── MultiTimerAccumulator.scala
│   │           │           │       │   ├── StringAccumulator.scala
│   │           │           │       │   └── SyncAccumulator.scala
│   │           │           │       ├── conf/
│   │           │           │       │   ├── FireSparkConf.scala
│   │           │           │       │   └── SparkAnnoManager.scala
│   │           │           │       ├── connector/
│   │           │           │       │   ├── BeanGenReceiver.scala
│   │           │           │       │   ├── DataGenReceiver.scala
│   │           │           │       │   ├── HBaseBulkConnector.scala
│   │           │           │       │   ├── HBaseBulkFunctions.scala
│   │           │           │       │   └── HBaseSparkBridge.scala
│   │           │           │       ├── ext/
│   │           │           │       │   ├── core/
│   │           │           │       │   │   ├── DStreamExt.scala
│   │           │           │       │   │   ├── DataFrameExt.scala
│   │           │           │       │   │   ├── DatasetExt.scala
│   │           │           │       │   │   ├── RDDExt.scala
│   │           │           │       │   │   ├── SQLContextExt.scala
│   │           │           │       │   │   ├── SparkConfExt.scala
│   │           │           │       │   │   ├── SparkContextExt.scala
│   │           │           │       │   │   ├── SparkSessionExt.scala
│   │           │           │       │   │   └── StreamingContextExt.scala
│   │           │           │       │   └── provider/
│   │           │           │       │       ├── HBaseBulkProvider.scala
│   │           │           │       │       ├── HBaseConnectorProvider.scala
│   │           │           │       │       ├── HBaseHadoopProvider.scala
│   │           │           │       │       ├── JdbcSparkProvider.scala
│   │           │           │       │       ├── KafkaSparkProvider.scala
│   │           │           │       │       ├── SparkProvider.scala
│   │           │           │       │       └── SqlProvider.scala
│   │           │           │       ├── listener/
│   │           │           │       │   ├── FireSparkListener.scala
│   │           │           │       │   └── FireStreamingQueryListener.scala
│   │           │           │       ├── plugin/
│   │           │           │       │   └── SparkArthasLauncher.scala
│   │           │           │       ├── rest/
│   │           │           │       │   └── SparkSystemRestful.scala
│   │           │           │       ├── sink/
│   │           │           │       │   ├── FireSink.scala
│   │           │           │       │   └── JdbcStreamSink.scala
│   │           │           │       ├── sql/
│   │           │           │       │   ├── SparkSqlExtensionsParserBase.scala
│   │           │           │       │   ├── SparkSqlParserBase.scala
│   │           │           │       │   └── SqlExtensions.scala
│   │           │           │       ├── sync/
│   │           │           │       │   ├── DistributeSyncManager.scala
│   │           │           │       │   ├── SparkLineageAccumulatorManager.scala
│   │           │           │       │   └── SyncSparkEngine.scala
│   │           │           │       ├── task/
│   │           │           │       │   └── SparkInternalTask.scala
│   │           │           │       ├── udf/
│   │           │           │       │   └── UDFs.scala
│   │           │           │       └── util/
│   │           │           │           ├── RocketMQUtils.scala
│   │           │           │           ├── SparkSingletonFactory.scala
│   │           │           │           └── SparkUtils.scala
│   │           │           └── fire.scala
│   │           ├── scala-spark-2.3/
│   │           │   └── com.zto.fire.spark.sql/
│   │           │       ├── SparkSqlExtensionsParser.scala
│   │           │       └── SparkSqlParser.scala
│   │           ├── scala-spark-2.4/
│   │           │   └── com.zto.fire.spark.sql/
│   │           │       ├── SparkSqlExtensionsParser.scala
│   │           │       └── SparkSqlParser.scala
│   │           ├── scala-spark-3.0/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── spark/
│   │           │                   └── sql/
│   │           │                       ├── SparkSqlExtensionsParser.scala
│   │           │                       └── SparkSqlParser.scala
│   │           ├── scala-spark-3.1/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── spark/
│   │           │                   └── sql/
│   │           │                       ├── SparkSqlExtensionsParser.scala
│   │           │                       └── SparkSqlParser.scala
│   │           ├── scala-spark-3.2/
│   │           │   └── com/
│   │           │       └── zto/
│   │           │           └── fire/
│   │           │               └── spark/
│   │           │                   └── sql/
│   │           │                       ├── SparkSqlExtensionsParser.scala
│   │           │                       └── SparkSqlParser.scala
│   │           └── scala-spark-3.3/
│   │               └── com/
│   │                   └── zto/
│   │                       └── fire/
│   │                           └── spark/
│   │                               └── sql/
│   │                                   ├── SparkSqlExtensionsParser.scala
│   │                                   └── SparkSqlParser.scala
│   └── pom.xml
├── fire-enhance/
│   ├── apache-arthas/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           └── java/
│   │               └── com/
│   │                   └── taobao/
│   │                       └── arthas/
│   │                           └── agent/
│   │                               └── attach/
│   │                                   └── ArthasAgent.java
│   ├── apache-flink/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           ├── java-flink-1.12/
│   │           │   └── org/
│   │           │       ├── apache/
│   │           │       │   └── flink/
│   │           │       │       ├── client/
│   │           │       │       │   └── deployment/
│   │           │       │       │       └── application/
│   │           │       │       │           └── ApplicationDispatcherBootstrap.java
│   │           │       │       ├── configuration/
│   │           │       │       │   └── GlobalConfiguration.java
│   │           │       │       ├── contrib/
│   │           │       │       │   └── streaming/
│   │           │       │       │       └── state/
│   │           │       │       │           ├── RocksDBStateBackend.java
│   │           │       │       │           └── restore/
│   │           │       │       │               └── RocksDBFullRestoreOperation.java
│   │           │       │       ├── runtime/
│   │           │       │       │   ├── checkpoint/
│   │           │       │       │   │   └── CheckpointCoordinator.java
│   │           │       │       │   └── util/
│   │           │       │       │       └── EnvironmentInformation.java
│   │           │       │       ├── table/
│   │           │       │       │   └── api/
│   │           │       │       │       └── internal/
│   │           │       │       │           └── TableEnvironmentImpl.java
│   │           │       │       └── util/
│   │           │       │           └── ExceptionUtils.java
│   │           │       └── rocksdb/
│   │           │           └── RocksDB.java
│   │           ├── java-flink-1.13/
│   │           │   └── org/
│   │           │       └── apache/
│   │           │           └── flink/
│   │           │               ├── client/
│   │           │               │   └── deployment/
│   │           │               │       └── application/
│   │           │               │           └── ApplicationDispatcherBootstrap.java
│   │           │               ├── configuration/
│   │           │               │   └── GlobalConfiguration.java
│   │           │               ├── contrib/
│   │           │               │   └── streaming/
│   │           │               │       └── state/
│   │           │               │           └── EmbeddedRocksDBStateBackend.java
│   │           │               ├── runtime/
│   │           │               │   ├── checkpoint/
│   │           │               │   │   └── CheckpointCoordinator.java
│   │           │               │   └── util/
│   │           │               │       └── EnvironmentInformation.java
│   │           │               ├── table/
│   │           │               │   └── api/
│   │           │               │       └── internal/
│   │           │               │           └── TableEnvironmentImpl.java
│   │           │               └── util/
│   │           │                   └── ExceptionUtils.java
│   │           └── java-flink-1.14/
│   │               └── org/
│   │                   ├── apache/
│   │                   │   └── flink/
│   │                   │       ├── client/
│   │                   │       │   └── deployment/
│   │                   │       │       └── application/
│   │                   │       │           └── ApplicationDispatcherBootstrap.java
│   │                   │       ├── configuration/
│   │                   │       │   └── GlobalConfiguration.java
│   │                   │       ├── connector/
│   │                   │       │   └── jdbc/
│   │                   │       │       ├── dialect/
│   │                   │       │       │   ├── AdbDialect.java
│   │                   │       │       │   ├── JdbcDialect.java
│   │                   │       │       │   ├── JdbcDialects.java
│   │                   │       │       │   ├── MySQLDialect.java
│   │                   │       │       │   └── OracleSQLDialect.java
│   │                   │       │       └── internal/
│   │                   │       │           └── converter/
│   │                   │       │               └── OracleSQLRowConverter.java
│   │                   │       ├── contrib/
│   │                   │       │   └── streaming/
│   │                   │       │       └── state/
│   │                   │       │           └── EmbeddedRocksDBStateBackend.java
│   │                   │       ├── runtime/
│   │                   │       │   ├── checkpoint/
│   │                   │       │   │   └── CheckpointCoordinator.java
│   │                   │       │   └── util/
│   │                   │       │       └── EnvironmentInformation.java
│   │                   │       ├── streaming/
│   │                   │       │   └── connectors/
│   │                   │       │       └── kafka/
│   │                   │       │           ├── FlinkKafkaConsumer.java
│   │                   │       │           └── FlinkKafkaConsumerBase.java
│   │                   │       ├── table/
│   │                   │       │   └── api/
│   │                   │       │       └── internal/
│   │                   │       │           └── TableEnvironmentImpl.java
│   │                   │       └── util/
│   │                   │           └── ExceptionUtils.java
│   │                   └── rocksdb/
│   │                       └── RocksDB.java
│   ├── apache-spark/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           └── scala-spark-3.0/
│   │               └── org/
│   │                   └── apache/
│   │                       └── spark/
│   │                           ├── internal/
│   │                           │   └── config/
│   │                           │       └── Streaming.scala
│   │                           ├── sql/
│   │                           │   └── execution/
│   │                           │       └── datasources/
│   │                           │           └── InsertIntoHadoopFsRelationCommand.scala
│   │                           └── streaming/
│   │                               └── scheduler/
│   │                                   └── ExecutorAllocationManager.scala
│   └── pom.xml
├── fire-examples/
│   ├── flink-examples/
│   │   ├── pom.xml
│   │   └── src/
│   │       ├── main/
│   │       │   ├── java/
│   │       │   │   └── com/
│   │       │   │       └── zto/
│   │       │   │           └── fire/
│   │       │   │               ├── examples/
│   │       │   │               │   └── bean/
│   │       │   │               │       ├── People.java
│   │       │   │               │       └── Student.java
│   │       │   │               └── sql/
│   │       │   │                   └── SqlCommandParser.java
│   │       │   ├── resources/
│   │       │   │   ├── META-INF/
│   │       │   │   │   └── services/
│   │       │   │   │       └── org.apache.flink.table.factories.Factory
│   │       │   │   ├── common.properties
│   │       │   │   ├── connector/
│   │       │   │   │   └── hive/
│   │       │   │   │       └── HiveSinkTest.properties
│   │       │   │   ├── log4j.properties
│   │       │   │   └── stream/
│   │       │   │       └── ConfigCenterTest.properties
│   │       │   └── scala/
│   │       │       └── com/
│   │       │           └── zto/
│   │       │               └── fire/
│   │       │                   └── examples/
│   │       │                       └── flink/
│   │       │                           ├── FlinkDemo.scala
│   │       │                           ├── FlinkSQLDemo.scala
│   │       │                           ├── Test.scala
│   │       │                           ├── acc/
│   │       │                           │   └── FlinkAccTest.scala
│   │       │                           ├── batch/
│   │       │                           │   ├── FireMapFunctionTest.scala
│   │       │                           │   ├── FlinkBatchTest.scala
│   │       │                           │   └── FlinkBrocastTest.scala
│   │       │                           ├── connector/
│   │       │                           │   ├── FlinkHudiTest.scala
│   │       │                           │   ├── bean/
│   │       │                           │   │   ├── BeanConnectorTest.scala
│   │       │                           │   │   ├── BeanDynamicTableFactory.scala
│   │       │                           │   │   ├── BeanDynamicTableSink.scala
│   │       │                           │   │   ├── BeanDynamicTableSource.scala
│   │       │                           │   │   └── BeanOptions.scala
│   │       │                           │   ├── clickhouse/
│   │       │                           │   │   └── ClickhouseTest.scala
│   │       │                           │   ├── hive/
│   │       │                           │   │   ├── HiveBatchSinkTest.scala
│   │       │                           │   │   └── HiveSinkTest.scala
│   │       │                           │   ├── kafka/
│   │       │                           │   │   └── KafkaConsumer.scala
│   │       │                           │   ├── rocketmq/
│   │       │                           │   │   ├── RocketMQConnectorTest.scala
│   │       │                           │   │   └── RocketTest.scala
│   │       │                           │   └── sql/
│   │       │                           │       ├── DDL.scala
│   │       │                           │       └── DataGenTest.scala
│   │       │                           ├── lineage/
│   │       │                           │   ├── FlinkSqlLineageTest.scala
│   │       │                           │   └── LineageTest.scala
│   │       │                           ├── module/
│   │       │                           │   ├── ArthasTest.scala
│   │       │                           │   └── ExceptionTest.scala
│   │       │                           ├── sql/
│   │       │                           │   ├── HiveDimDemo.scala
│   │       │                           │   ├── HiveWriteDemo.scala
│   │       │                           │   ├── JdbcDimDemo.scala
│   │       │                           │   ├── RocketMQConnectorTest.scala
│   │       │                           │   ├── SimpleSqlDemo.scala
│   │       │                           │   └── SqlJoinDemo.scala
│   │       │                           ├── stream/
│   │       │                           │   ├── ConfigCenterTest.scala
│   │       │                           │   ├── FlinkHiveTest.scala
│   │       │                           │   ├── FlinkPartitioner.scala
│   │       │                           │   ├── FlinkRetractStreamTest.scala
│   │       │                           │   ├── FlinkSinkHiveTest.scala
│   │       │                           │   ├── FlinkSinkTest.scala
│   │       │                           │   ├── FlinkSourceTest.scala
│   │       │                           │   ├── FlinkStateTest.scala
│   │       │                           │   ├── HBaseTest.scala
│   │       │                           │   ├── HiveRW.scala
│   │       │                           │   ├── JdbcTest.scala
│   │       │                           │   ├── UDFTest.scala
│   │       │                           │   ├── WatermarkTest.scala
│   │       │                           │   └── WindowTest.scala
│   │       │                           └── util/
│   │       │                               └── StateCleaner.scala
│   │       └── test/
│   │           └── scala/
│   │               └── com/
│   │                   └── zto/
│   │                       └── fire/
│   │                           └── examples/
│   │                               └── flink/
│   │                                   ├── anno/
│   │                                   │   └── AnnoConfTest.scala
│   │                                   ├── core/
│   │                                   │   └── BaseFlinkTester.scala
│   │                                   └── jdbc/
│   │                                       └── JdbcUnitTest.scala
│   ├── pom.xml
│   └── spark-examples/
│       ├── pom.xml
│       └── src/
│           ├── main/
│           │   ├── java/
│           │   │   └── com/
│           │   │       └── zto/
│           │   │           └── fire/
│           │   │               └── examples/
│           │   │                   └── bean/
│           │   │                       ├── Hudi.java
│           │   │                       ├── Student.java
│           │   │                       └── StudentMulti.java
│           │   ├── resources/
│           │   │   ├── common.properties
│           │   │   ├── jdbc/
│           │   │   │   └── JdbcTest.properties
│           │   │   └── streaming/
│           │   │       └── ConfigCenterTest.properties
│           │   └── scala/
│           │       └── com/
│           │           └── zto/
│           │               └── fire/
│           │                   └── examples/
│           │                       └── spark/
│           │                           ├── SparkDemo.scala
│           │                           ├── SparkSQLDemo.scala
│           │                           ├── Test.scala
│           │                           ├── acc/
│           │                           │   └── FireAccTest.scala
│           │                           ├── hbase/
│           │                           │   ├── HBaseConnectorTest.scala
│           │                           │   ├── HBaseHadoopTest.scala
│           │                           │   ├── HBaseStreamingTest.scala
│           │                           │   ├── HbaseBulkTest.scala
│           │                           │   └── HiveQL.scala
│           │                           ├── hive/
│           │                           │   ├── HiveClusterReader.scala
│           │                           │   ├── HiveMetadataTest.scala
│           │                           │   └── HiveRW.scala
│           │                           ├── jdbc/
│           │                           │   ├── JdbcStreamingTest.scala
│           │                           │   └── JdbcTest.scala
│           │                           ├── lineage/
│           │                           │   ├── DataSourceTest.scala
│           │                           │   ├── LineageTest.scala
│           │                           │   └── SparkCoreLineageTest.scala
│           │                           ├── module/
│           │                           │   ├── ArthasTest.scala
│           │                           │   └── ExceptionTest.scala
│           │                           ├── schedule/
│           │                           │   ├── ScheduleTest.scala
│           │                           │   └── Tasks.scala
│           │                           ├── sql/
│           │                           │   ├── LoadTestSQL.scala
│           │                           │   └── SparkSqlParseTest.scala
│           │                           ├── streaming/
│           │                           │   ├── AtLeastOnceTest.scala
│           │                           │   ├── ConfigCenterTest.scala
│           │                           │   ├── DataGenTest.scala
│           │                           │   ├── KafkaTest.scala
│           │                           │   └── RocketTest.scala
│           │                           ├── structured/
│           │                           │   ├── JdbcSinkTest.scala
│           │                           │   ├── MapTest.scala
│           │                           │   └── StructuredStreamingTest.scala
│           │                           └── thread/
│           │                               └── ThreadTest.scala
│           └── test/
│               ├── resources/
│               │   ├── ConfigCenterUnitTest.properties
│               │   ├── SparkSQLParserTest.properties
│               │   └── common.properties
│               ├── scala/
│               │   └── com/
│               │       └── zto/
│               │           └── fire/
│               │               └── examples/
│               │                   └── spark/
│               │                       ├── anno/
│               │                       │   └── AnnoConfTest.scala
│               │                       ├── conf/
│               │                       │   └── ConfigCenterUnitTest.scala
│               │                       ├── core/
│               │                       │   └── BaseSparkTester.scala
│               │                       ├── hbase/
│               │                       │   ├── HBaseApiTest.scala
│               │                       │   ├── HBaseBaseTester.scala
│               │                       │   ├── HBaseBulkUnitTest.scala
│               │                       │   ├── HBaseConnectorUnitTest.scala
│               │                       │   └── HBaseHadoopUnitTest.scala
│               │                       ├── hive/
│               │                       │   └── HiveUnitTest.scala
│               │                       ├── jdbc/
│               │                       │   ├── JdbcConnectorTest.scala
│               │                       │   └── JdbcUnitTest.scala
│               │                       └── parser/
│               │                           └── SparkSQLParserTest.scala
│               └── scala-spark-3.0/
│                   └── com/
│                       └── zto/
│                           └── fire/
│                               └── examples/
│                                   └── spark/
│                                       └── sql/
│                                           └── SparkSqlParseTest.scala
├── fire-external/
│   ├── .gitignore
│   ├── fire-apollo/
│   │   ├── pom.xml
│   │   └── src/
│   │       ├── main/
│   │       │   ├── resources/
│   │       │   │   └── apollo.properties
│   │       │   └── scala/
│   │       │       └── com/
│   │       │           └── zto/
│   │       │               └── fire/
│   │       │                   └── apollo/
│   │       │                       └── util/
│   │       │                           ├── ApolloConfigUtil.scala
│   │       │                           └── ApolloConstant.scala
│   │       └── test/
│   │           └── scala/
│   │               └── com/
│   │                   └── zto/
│   │                       └── fire/
│   │                           └── apollo/
│   │                               └── util/
│   │                                   └── ApolloConfigUtilTest.scala
│   └── pom.xml
├── fire-metrics/
│   ├── pom.xml
│   └── src/
│       ├── main/
│       │   └── java/
│       │       └── com/
│       │           └── zto/
│       │               └── fire/
│       │                   └── metrics/
│       │                       └── MetricsDemo.scala
│       └── test/
│           ├── java/
│           │   └── com/
│           │       └── zto/
│           │           └── fire/
│           │               └── jmx/
│           │                   ├── Hello.java
│           │                   ├── HelloMBean.java
│           │                   ├── JmxApp.java
│           │                   ├── QueueSample.java
│           │                   ├── QueueSampler.java
│           │                   └── QueueSamplerMXBean.java
│           └── scala/
│               └── com.zto.fire.metrics/
│                   └── MetricsTest.scala
├── fire-platform/
│   └── pom.xml
├── fire-shell/
│   ├── flink-shell/
│   │   ├── pom.xml
│   │   └── src/
│   │       └── main/
│   │           ├── java/
│   │           │   └── org/
│   │           │       └── apache/
│   │           │           └── flink/
│   │           │               └── api/
│   │           │                   └── java/
│   │           │                       ├── JarHelper.java
│   │           │                       ├── ScalaShellEnvironment.java
│   │           │                       └── ScalaShellStreamEnvironment.java
│   │           ├── java-flink-1.12/
│   │           │   └── org.apache.flink.streaming.api.environment/
│   │           │       └── StreamExecutionEnvironment.java
│   │           ├── java-flink-1.13/
│   │           │   └── org.apache.flink.streaming.api.environment/
│   │           │       └── StreamExecutionEnvironment.java
│   │           └── scala/
│   │               ├── com/
│   │               │   └── zto/
│   │               │       └── fire/
│   │               │           └── shell/
│   │               │               └── flink/
│   │               │                   ├── FireILoop.scala
│   │               │                   └── Test.scala
│   │               └── org/
│   │                   └── apache/
│   │                       └── flink/
│   │                           └── api/
│   │                               └── scala/
│   │                                   └── FlinkShell.scala
│   ├── pom.xml
│   └── spark-shell/
│       ├── pom.xml
│       └── src/
│           └── main/
│               └── scala-spark-3.0/
│                   ├── com/
│                   │   └── zto/
│                   │       └── fire/
│                   │           └── shell/
│                   │               └── spark/
│                   │                   ├── FireILoop.scala
│                   │                   ├── Main.scala
│                   │                   └── Test.scala
│                   └── org/
│                       └── apache/
│                           └── spark/
│                               └── repl/
│                                   ├── ExecutorClassLoader.scala
│                                   └── Signaling.scala
└── pom.xml
SYMBOL INDEX (3524 symbols across 198 files)

FILE: fire-common/src/main/java/com/zto/fire/common/bean/FireTask.java
  class FireTask (line 28) | public class FireTask {
    method FireTask (line 96) | public FireTask() {
    method getEngine (line 112) | public String getEngine() {
    method setEngine (line 116) | public void setEngine(String engine) {
    method getIp (line 120) | public String getIp() {
    method setIp (line 124) | public void setIp(String ip) {
    method getHostname (line 128) | public String getHostname() {
    method setHostname (line 132) | public void setHostname(String hostname) {
    method getPid (line 136) | public String getPid() {
    method setPid (line 140) | public void setPid(String pid) {
    method getMainClass (line 144) | public String getMainClass() {
    method setMainClass (line 148) | public void setMainClass(String mainClass) {
    method getTimestamp (line 152) | public String getTimestamp() {
    method setTimestamp (line 156) | public void setTimestamp(String timestamp) {
    method getEngineVersion (line 160) | public String getEngineVersion() {
    method setEngineVersion (line 164) | public void setEngineVersion(String engineVersion) {
    method getFireVersion (line 168) | public String getFireVersion() {
    method setFireVersion (line 172) | public void setFireVersion(String fireVersion) {
    method getLaunchTime (line 176) | public String getLaunchTime() {
    method setLaunchTime (line 180) | public void setLaunchTime(String launchTime) {
    method getUptime (line 184) | public Long getUptime() {
    method setUptime (line 188) | public void setUptime(Long uptime) {
    method getAppId (line 192) | public String getAppId() {
    method setAppId (line 196) | public void setAppId(String appId) {
    method getDeployMode (line 200) | public String getDeployMode() {
    method setDeployMode (line 204) | public void setDeployMode(String deployMode) {
    method getJobType (line 208) | public String getJobType() {
    method setJobType (line 212) | public void setJobType(String jobType) {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/analysis/ExceptionMsg.java
  class ExceptionMsg (line 29) | public class ExceptionMsg extends FireTask {
    method ExceptionMsg (line 51) | public ExceptionMsg() {
    method ExceptionMsg (line 55) | public ExceptionMsg(String stackTitle, String stackTrace, String excep...
    method ExceptionMsg (line 69) | public ExceptionMsg(Throwable e, String sql) {
    method ExceptionMsg (line 73) | public ExceptionMsg(Throwable e) {
    method ExceptionMsg (line 77) | public ExceptionMsg(String stackTitle, String stackTrace, String excep...
    method ExceptionMsg (line 81) | public ExceptionMsg(String stackTrace) {
    method getEngine (line 85) | public String getEngine() {
    method setEngine (line 89) | public void setEngine(String engine) {
    method getStackTitle (line 93) | public String getStackTitle() {
    method setStackTitle (line 97) | public void setStackTitle(String stackTitle) {
    method getStackTrace (line 101) | public String getStackTrace() {
    method setStackTrace (line 105) | public void setStackTrace(String stackTrace) {
    method getSql (line 109) | public String getSql() {
    method setSql (line 113) | public void setSql(String sql) {
    method getIp (line 117) | public String getIp() {
    method setIp (line 121) | public void setIp(String ip) {
    method getHostname (line 125) | public String getHostname() {
    method setHostname (line 129) | public void setHostname(String hostname) {
    method getPid (line 133) | public String getPid() {
    method setPid (line 137) | public void setPid(String pid) {
    method getMainClass (line 141) | public String getMainClass() {
    method setMainClass (line 145) | public void setMainClass(String mainClass) {
    method getTimestamp (line 149) | public String getTimestamp() {
    method setTimestamp (line 153) | public void setTimestamp(String timestamp) {
    method toString (line 157) | @Override

FILE: fire-common/src/main/java/com/zto/fire/common/bean/config/ConfigurationParam.java
  class ConfigurationParam (line 32) | public class ConfigurationParam {
    method getCode (line 36) | public Integer getCode() {
    method setCode (line 40) | public void setCode(Integer code) {
    method getContent (line 44) | public Map<ConfigureLevel, Map<String, String>> getContent() {
    method setContent (line 48) | public void setContent(Map<ConfigureLevel, Map<String, String>> conten...

FILE: fire-common/src/main/java/com/zto/fire/common/bean/lineage/Lineage.java
  class Lineage (line 28) | public class Lineage extends FireTask {
    method Lineage (line 40) | public Lineage() {
    method Lineage (line 44) | public Lineage(Object lineage) {
    method Lineage (line 49) | public Lineage(Object lineage, SQLLineage sql) {
    method getDatasource (line 54) | public Object getDatasource() {
    method setDatasource (line 58) | public void setDatasource(Object datasource) {
    method getSql (line 62) | public SQLLineage getSql() {
    method setSql (line 66) | public void setSql(SQLLineage sql) {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLLineage.java
  class SQLLineage (line 31) | public class SQLLineage implements DatasourceDesc {
    method SQLLineage (line 48) | public SQLLineage() {
    method getStatements (line 54) | public List<String> getStatements() {
    method setStatements (line 58) | public void setStatements(List<String> statements) {
    method setTables (line 62) | public void setTables(List<SQLTable> tables) {
    method getTables (line 66) | public List<SQLTable> getTables() {
    method setRelations (line 70) | public void setRelations(List<SQLTableRelations> relations) {
    method getRelations (line 74) | public List<SQLTableRelations> getRelations() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLTable.java
  class SQLTable (line 28) | public class SQLTable {
    method SQLTable (line 75) | public SQLTable() {
    method SQLTable (line 82) | public SQLTable(String physicalTable) {
    method SQLTable (line 87) | public SQLTable(String catalog, String cluster, String physicalTable, ...
    method setCatalog (line 100) | public void setCatalog(String catalog) {
    method getCatalog (line 104) | public String getCatalog() {
    method setCluster (line 108) | public void setCluster(String cluster) {
    method getCluster (line 112) | public String getCluster() {
    method setPhysicalTable (line 116) | public void setPhysicalTable(String physicalTable) {
    method getPhysicalTable (line 120) | public String getPhysicalTable() {
    method setTmpView (line 124) | public void setTmpView(String tmpView) {
    method getTmpView (line 128) | public String getTmpView() {
    method getOptions (line 132) | public Map<String, String> getOptions() {
    method setOptions (line 136) | public void setOptions(Map<String, String> options) {
    method setOperation (line 140) | public void setOperation(Set<String> operation) {
    method getOperation (line 144) | public Set<String> getOperation() {
    method setColumns (line 148) | public void setColumns(HashSet<SQLTableColumns> columns) {
    method getColumns (line 152) | public Set<SQLTableColumns> getColumns() {
    method getPartitions (line 156) | public Set<SQLTablePartitions> getPartitions() {
    method setPartitions (line 160) | public void setPartitions(HashSet<SQLTablePartitions> partitions) {
    method getComment (line 164) | public String getComment() {
    method setComment (line 168) | public void setComment(String comment) {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLTableColumns.java
  class SQLTableColumns (line 28) | public class SQLTableColumns {
    method SQLTableColumns (line 41) | public SQLTableColumns() {
    method SQLTableColumns (line 44) | public SQLTableColumns(String name, String type) {
    method setName (line 49) | public void setName(String name) {
    method getName (line 53) | public String getName() {
    method setType (line 57) | public void setType(String type) {
    method getType (line 61) | public String getType() {
    method equals (line 65) | @Override
    method hashCode (line 77) | @Override

FILE: fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLTablePartitions.java
  class SQLTablePartitions (line 28) | public class SQLTablePartitions {
    method SQLTablePartitions (line 41) | public SQLTablePartitions() {
    method SQLTablePartitions (line 44) | public SQLTablePartitions(String name, String value) {
    method setName (line 49) | public void setName(String name) {
    method getName (line 53) | public String getName() {
    method setValue (line 57) | public void setValue(String value) {
    method getValue (line 61) | public String getValue() {
    method equals (line 65) | @Override
    method hashCode (line 77) | @Override

FILE: fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLTableRelations.java
  class SQLTableRelations (line 28) | public class SQLTableRelations {
    method SQLTableRelations (line 40) | public SQLTableRelations() {
    method SQLTableRelations (line 43) | public SQLTableRelations(String srcTable, String sinkTable) {
    method setSrcTable (line 48) | public void setSrcTable(String srcTable) {
    method getSrcTable (line 52) | public String getSrcTable() {
    method setSinkTable (line 56) | public void setSinkTable(String sinkTable) {
    method getSinkTable (line 60) | public String getSinkTable() {
    method equals (line 64) | @Override
    method hashCode (line 76) | @Override
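
The lineage relation beans above (`SQLTableColumns`, `SQLTablePartitions`, `SQLTableRelations`) all override `equals`/`hashCode`, which lets them be deduplicated inside the `Set`/`HashSet` accessors listed for `SQLTable` and `SQLLineage`. A minimal sketch of that pattern, assuming `SQLTableRelations` equality is value-based on `srcTable` and `sinkTable` (the field semantics are not visible in this index, so the class name here is hypothetical):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical sketch of a source->sink relation bean with value-based
// equals/hashCode, mirroring the accessors listed for SQLTableRelations.
public class SQLTableRelationsSketch {
    private String srcTable;
    private String sinkTable;

    public SQLTableRelationsSketch() {}

    public SQLTableRelationsSketch(String srcTable, String sinkTable) {
        this.srcTable = srcTable;
        this.sinkTable = sinkTable;
    }

    public String getSrcTable() { return srcTable; }
    public String getSinkTable() { return sinkTable; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        SQLTableRelationsSketch that = (SQLTableRelationsSketch) o;
        return Objects.equals(srcTable, that.srcTable)
                && Objects.equals(sinkTable, that.sinkTable);
    }

    @Override
    public int hashCode() {
        return Objects.hash(srcTable, sinkTable);
    }

    public static void main(String[] args) {
        Set<SQLTableRelationsSketch> relations = new HashSet<>();
        relations.add(new SQLTableRelationsSketch("ods.orders", "dw.orders"));
        relations.add(new SQLTableRelationsSketch("ods.orders", "dw.orders"));
        System.out.println(relations.size()); // 1: duplicates collapse
    }
}
```

Value equality is what makes a `Set<SQLTableRelations>` a natural container for lineage edges: re-registering the same src/sink pair is a no-op.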

FILE: fire-common/src/main/java/com/zto/fire/common/bean/rest/ResultMsg.java
  class ResultMsg (line 31) | public class ResultMsg {
    method isSuccess (line 45) | public static boolean isSuccess(ResultMsg resultMsg) {
    method getMsg (line 55) | public static String getMsg(ResultMsg resultMsg) {
    method getCode (line 68) | public static ErrorCode getCode(ResultMsg resultMsg) {
    method ResultMsg (line 75) | public ResultMsg() {
    method ResultMsg (line 78) | public ResultMsg(String content, ErrorCode code, String msg) {
    method getContent (line 84) | public Object getContent() {
    method setContent (line 88) | public void setContent(Object content) {
    method getCode (line 92) | public ErrorCode getCode() {
    method setCode (line 96) | public void setCode(ErrorCode code) {
    method getMsg (line 100) | public String getMsg() {
    method setMsg (line 104) | public void setMsg(String msg) {
    method buildSuccess (line 111) | public static String buildSuccess(Object content, String msg) {
    method buildError (line 118) | public static String buildError(String msg, ErrorCode errorCode) {
    method toString (line 122) | @Override

FILE: fire-common/src/main/java/com/zto/fire/common/bean/rest/yarn/App.java
  class App (line 24) | public class App {
    method getId (line 80) | public String getId() {
    method setId (line 84) | public void setId(String id) {
    method getUser (line 88) | public String getUser() {
    method setUser (line 92) | public void setUser(String user) {
    method getName (line 96) | public String getName() {
    method setName (line 100) | public void setName(String name) {
    method getQueue (line 104) | public String getQueue() {
    method setQueue (line 108) | public void setQueue(String queue) {
    method getState (line 112) | public String getState() {
    method setState (line 116) | public void setState(String state) {
    method getFinalStatus (line 120) | public String getFinalStatus() {
    method setFinalStatus (line 124) | public void setFinalStatus(String finalStatus) {
    method getProgress (line 128) | public Double getProgress() {
    method setProgress (line 132) | public void setProgress(Double progress) {
    method getTrackingUI (line 136) | public String getTrackingUI() {
    method setTrackingUI (line 140) | public void setTrackingUI(String trackingUI) {
    method getTrackingUrl (line 144) | public String getTrackingUrl() {
    method setTrackingUrl (line 148) | public void setTrackingUrl(String trackingUrl) {
    method getDiagnostics (line 152) | public String getDiagnostics() {
    method setDiagnostics (line 156) | public void setDiagnostics(String diagnostics) {
    method getClusterId (line 160) | public Long getClusterId() {
    method setClusterId (line 164) | public void setClusterId(Long clusterId) {
    method getApplicationType (line 168) | public String getApplicationType() {
    method setApplicationType (line 172) | public void setApplicationType(String applicationType) {
    method getApplicationTags (line 176) | public String getApplicationTags() {
    method setApplicationTags (line 180) | public void setApplicationTags(String applicationTags) {
    method getStartedTime (line 184) | public Long getStartedTime() {
    method setStartedTime (line 188) | public void setStartedTime(Long startedTime) {
    method getFinishedTime (line 192) | public Long getFinishedTime() {
    method setFinishedTime (line 196) | public void setFinishedTime(Long finishedTime) {
    method getElapsedTime (line 200) | public Long getElapsedTime() {
    method setElapsedTime (line 204) | public void setElapsedTime(Long elapsedTime) {
    method getAmContainerLogs (line 208) | public String getAmContainerLogs() {
    method setAmContainerLogs (line 212) | public void setAmContainerLogs(String amContainerLogs) {
    method getAmHostHttpAddress (line 216) | public String getAmHostHttpAddress() {
    method setAmHostHttpAddress (line 220) | public void setAmHostHttpAddress(String amHostHttpAddress) {
    method getAllocatedMB (line 224) | public Long getAllocatedMB() {
    method setAllocatedMB (line 228) | public void setAllocatedMB(Long allocatedMB) {
    method getAllocatedVCores (line 232) | public Long getAllocatedVCores() {
    method setAllocatedVCores (line 236) | public void setAllocatedVCores(Long allocatedVCores) {
    method getRunningContainers (line 240) | public Long getRunningContainers() {
    method setRunningContainers (line 244) | public void setRunningContainers(Long runningContainers) {
    method getMemorySeconds (line 248) | public Long getMemorySeconds() {
    method setMemorySeconds (line 252) | public void setMemorySeconds(Long memorySeconds) {
    method getVcoreSeconds (line 256) | public Long getVcoreSeconds() {
    method setVcoreSeconds (line 260) | public void setVcoreSeconds(Long vcoreSeconds) {
    method getPreemptedResourceMB (line 264) | public Long getPreemptedResourceMB() {
    method setPreemptedResourceMB (line 268) | public void setPreemptedResourceMB(Long preemptedResourceMB) {
    method getPreemptedResourceVCores (line 272) | public Long getPreemptedResourceVCores() {
    method setPreemptedResourceVCores (line 276) | public void setPreemptedResourceVCores(Long preemptedResourceVCores) {
    method getNumNonAMContainerPreempted (line 280) | public Long getNumNonAMContainerPreempted() {
    method setNumNonAMContainerPreempted (line 284) | public void setNumNonAMContainerPreempted(Long numNonAMContainerPreemp...
    method getNumAMContainerPreempted (line 288) | public Long getNumAMContainerPreempted() {
    method setNumAMContainerPreempted (line 292) | public void setNumAMContainerPreempted(Long numAMContainerPreempted) {
    method getLogAggregationStatus (line 296) | public String getLogAggregationStatus() {
    method setLogAggregationStatus (line 300) | public void setLogAggregationStatus(String logAggregationStatus) {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/ClassLoaderInfo.java
  class ClassLoaderInfo (line 28) | public class ClassLoaderInfo implements Serializable {
    method ClassLoaderInfo (line 46) | private ClassLoaderInfo() {}
    method getLoadedClassCount (line 48) | public long getLoadedClassCount() {
    method getTotalLoadedClassCount (line 52) | public long getTotalLoadedClassCount() {
    method getUnloadedClassCount (line 56) | public long getUnloadedClassCount() {
    method getClassLoaderInfo (line 63) | public static ClassLoaderInfo getClassLoaderInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/CpuInfo.java
  class CpuInfo (line 37) | public class CpuInfo implements Serializable {
    method getLoadAverage (line 140) | public double[] getLoadAverage() {
    method getLastLoadAverage (line 144) | public double getLastLoadAverage() {
    method getCpuLoad (line 148) | public double getCpuLoad() {
    method getAvailableProcessors (line 152) | public int getAvailableProcessors() {
    method getProcessCpuTime (line 156) | public long getProcessCpuTime() {
    method getProcessCpuLoad (line 160) | public double getProcessCpuLoad() {
    method getTemperature (line 164) | public String getTemperature() {
    method getVoltage (line 168) | public String getVoltage() {
    method getFanSpeeds (line 172) | public int[] getFanSpeeds() {
    method getPhysicalCpu (line 176) | public int getPhysicalCpu() {
    method getLogicalCpu (line 180) | public int getLogicalCpu() {
    method getUptime (line 184) | public String getUptime() {
    method getIoWait (line 188) | public long getIoWait() {
    method getUserTick (line 192) | public long getUserTick() {
    method getNiceTick (line 196) | public long getNiceTick() {
    method getSysTick (line 200) | public long getSysTick() {
    method getIdleTick (line 204) | public long getIdleTick() {
    method getIrqTick (line 208) | public long getIrqTick() {
    method getSoftIrqTick (line 212) | public long getSoftIrqTick() {
    method getStealTick (line 216) | public long getStealTick() {
    method CpuInfo (line 220) | private CpuInfo() {
    method getCpuInfo (line 226) | public static CpuInfo getCpuInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/DiskInfo.java
  class DiskInfo (line 37) | public class DiskInfo {
    class DiskPartitionInfo (line 71) | private static class DiskPartitionInfo {
      method DiskPartitionInfo (line 95) | public DiskPartitionInfo() {
      method DiskPartitionInfo (line 98) | public DiskPartitionInfo(String name, String fileSystem, String moun...
      method getName (line 112) | public String getName() {
      method getFileSystem (line 116) | public String getFileSystem() {
      method getMount (line 120) | public String getMount() {
      method getTotal (line 124) | public long getTotal() {
      method getFree (line 128) | public long getFree() {
      method getUsed (line 132) | public long getUsed() {
      method getTotalInodes (line 136) | public long getTotalInodes() {
      method getFreeInodes (line 140) | public long getFreeInodes() {
      method getUsedInodes (line 144) | public long getUsedInodes() {
      method getUsedPer (line 148) | public String getUsedPer() {
      method getUsedInodesPer (line 152) | public String getUsedInodesPer() {
    method getName (line 157) | public String getName() {
    method getModel (line 161) | public String getModel() {
    method getTotal (line 165) | public long getTotal() {
    method getReads (line 169) | public long getReads() {
    method getWrites (line 173) | public long getWrites() {
    method getTransferTime (line 177) | public long getTransferTime() {
    method DiskInfo (line 181) | private DiskInfo() {
    method DiskInfo (line 184) | private DiskInfo(String name, String model, long total, long reads, lo...
    method getDiskInfo (line 196) | public static Map<String, Object> getDiskInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/DisplayInfo.java
  class DisplayInfo (line 27) | public class DisplayInfo {
    method getDisplay (line 34) | public String getDisplay() {
    method DisplayInfo (line 38) | private DisplayInfo() {
    method getDisplayInfo (line 44) | public static DisplayInfo getDisplayInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/HardwareInfo.java
  class HardwareInfo (line 31) | public class HardwareInfo {
    method getManufacturer (line 59) | public String getManufacturer() {
    method getModel (line 63) | public String getModel() {
    method getSerialNumber (line 67) | public String getSerialNumber() {
    method getPower (line 71) | public String getPower() {
    method getBatteryCapacity (line 75) | public String getBatteryCapacity() {
    method HardwareInfo (line 79) | private HardwareInfo() {
    method getHardwareInfo (line 85) | public static HardwareInfo getHardwareInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/JvmInfo.java
  class JvmInfo (line 28) | public class JvmInfo implements Serializable {
    method JvmInfo (line 135) | private JvmInfo() {}
    method getMemoryMax (line 137) | public long getMemoryMax() {
    method getMemoryTotal (line 141) | public long getMemoryTotal() {
    method getMemoryFree (line 145) | public long getMemoryFree() {
    method getMemoryUsed (line 149) | public long getMemoryUsed() {
    method getStartTime (line 153) | public long getStartTime() {
    method getUptime (line 157) | public long getUptime() {
    method getHeapInitSize (line 161) | public long getHeapInitSize() {
    method getHeapMaxSize (line 165) | public long getHeapMaxSize() {
    method getHeapUseSize (line 169) | public long getHeapUseSize() {
    method getHeapCommitedSize (line 173) | public long getHeapCommitedSize() {
    method getNonHeapInitSize (line 177) | public long getNonHeapInitSize() {
    method getNonHeapMaxSize (line 181) | public long getNonHeapMaxSize() {
    method getNonHeapUseSize (line 185) | public long getNonHeapUseSize() {
    method getNonHeapCommittedSize (line 189) | public long getNonHeapCommittedSize() {
    method getJavaVersion (line 193) | public String getJavaVersion() {
    method getJavaHome (line 197) | public String getJavaHome() {
    method getClassVersion (line 201) | public String getClassVersion() {
    method getMinorGCCount (line 205) | public long getMinorGCCount() {
    method getMinorGCTime (line 209) | public long getMinorGCTime() {
    method getFullGCCount (line 213) | public long getFullGCCount() {
    method getFullGCTime (line 217) | public long getFullGCTime() {
    method getJvmOptions (line 221) | public List<String> getJvmOptions() {
    method getJvmInfo (line 228) | public static JvmInfo getJvmInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/MemoryInfo.java
  class MemoryInfo (line 29) | public class MemoryInfo implements Serializable {
    method MemoryInfo (line 67) | private MemoryInfo() {}
    method getTotal (line 69) | public long getTotal() {
    method getFree (line 73) | public long getFree() {
    method getUsed (line 77) | public long getUsed() {
    method getCommitVirtual (line 81) | public long getCommitVirtual() {
    method getSwapTotal (line 85) | public long getSwapTotal() {
    method getSwapFree (line 89) | public long getSwapFree() {
    method getSwapUsed (line 93) | public long getSwapUsed() {
    method getMemoryInfo (line 100) | public static MemoryInfo getMemoryInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/NetworkInfo.java
  class NetworkInfo (line 33) | public class NetworkInfo {
    method getName (line 120) | public String getName() {
    method getDisplayName (line 124) | public String getDisplayName() {
    method getMacAddress (line 128) | public String getMacAddress() {
    method getMtu (line 132) | public int getMtu() {
    method getSpeed (line 136) | public long getSpeed() {
    method getIpv4 (line 140) | public String[] getIpv4() {
    method getIpv6 (line 144) | public String[] getIpv6() {
    method getIp (line 148) | public String getIp() {
    method getPacketsRecv (line 152) | public long getPacketsRecv() {
    method getPacketsSent (line 156) | public long getPacketsSent() {
    method getBytesRecv (line 160) | public long getBytesRecv() {
    method getBytesSent (line 164) | public long getBytesSent() {
    method getHostname (line 168) | public String getHostname() {
    method getDomainName (line 172) | public String getDomainName() {
    method getDns (line 176) | public String[] getDns() {
    method getIpv4Gateway (line 180) | public String getIpv4Gateway() {
    method getIpv6Gateway (line 184) | public String getIpv6Gateway() {
    method NetworkInfo (line 188) | private NetworkInfo() {}
    method getNetworkInfo (line 190) | public static List<NetworkInfo> getNetworkInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/OSInfo.java
  class OSInfo (line 30) | public class OSInfo {
    method OSInfo (line 88) | private OSInfo() {
    method getName (line 91) | public String getName() {
    method getArch (line 95) | public String getArch() {
    method getVersion (line 99) | public String getVersion() {
    method getUserName (line 103) | public String getUserName() {
    method getUserHome (line 107) | public String getUserHome() {
    method getUserDir (line 111) | public String getUserDir() {
    method getIp (line 115) | public String getIp() {
    method getHostname (line 119) | public String getHostname() {
    method getManufacturer (line 123) | public String getManufacturer() {
    method getUptime (line 127) | public String getUptime() {
    method getFamily (line 131) | public String getFamily() {
    method getOSInfo (line 138) | public static OSInfo getOSInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/RuntimeInfo.java
  class RuntimeInfo (line 30) | public class RuntimeInfo implements Serializable {
    method RuntimeInfo (line 79) | private RuntimeInfo() {
    method getJvmInfo (line 82) | public JvmInfo getJvmInfo() {
    method getThreadInfo (line 86) | public ThreadInfo getThreadInfo() {
    method getCpuInfo (line 90) | public CpuInfo getCpuInfo() {
    method getMemoryInfo (line 94) | public MemoryInfo getMemoryInfo() {
    method getClassLoaderInfo (line 98) | public ClassLoaderInfo getClassLoaderInfo() {
    method getIp (line 102) | public String getIp() {
    method getHostname (line 106) | public String getHostname() {
    method getPid (line 110) | public String getPid() {
    method getStartTime (line 114) | public long getStartTime() {
    method getUptime (line 118) | public long getUptime() {
    method getRuntimeInfo (line 128) | public static RuntimeInfo getRuntimeInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/ThreadInfo.java
  class ThreadInfo (line 29) | public class ThreadInfo implements Serializable {
    method ThreadInfo (line 62) | private ThreadInfo() {}
    method getCpuTime (line 64) | public long getCpuTime() {
    method getUserTime (line 68) | public long getUserTime() {
    method getDeamonCount (line 72) | public int getDeamonCount() {
    method getPeakCount (line 76) | public int getPeakCount() {
    method getTotalCount (line 80) | public int getTotalCount() {
    method getTotalStartedCount (line 84) | public long getTotalStartedCount() {
    method getThreadInfo (line 91) | public static ThreadInfo getThreadInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/bean/runtime/UsbInfo.java
  class UsbInfo (line 30) | public class UsbInfo {
    method getName (line 57) | public String getName() {
    method getProductId (line 61) | public String getProductId() {
    method getVendor (line 65) | public String getVendor() {
    method getVendorId (line 69) | public String getVendorId() {
    method getSerialNumber (line 73) | public String getSerialNumber() {
    method UsbInfo (line 77) | private UsbInfo() {}
    method UsbInfo (line 79) | public UsbInfo(String name, String productId, String vendor, String ve...
    method getUsbInfo (line 90) | public static List<UsbInfo> getUsbInfo() {

FILE: fire-common/src/main/java/com/zto/fire/common/enu/ConfigureLevel.java
  type ConfigureLevel (line 26) | public enum ConfigureLevel {
    method ConfigureLevel (line 31) | ConfigureLevel(int level) {

FILE: fire-common/src/main/java/com/zto/fire/common/enu/Datasource.java
  type Datasource (line 29) | public enum Datasource  {
    method Datasource (line 35) | Datasource(int type) {
    method parse (line 41) | public static Datasource parse(String dataSource) {

FILE: fire-common/src/main/java/com/zto/fire/common/enu/ErrorCode.java
  type ErrorCode (line 24) | public enum ErrorCode {

FILE: fire-common/src/main/java/com/zto/fire/common/enu/JdbcDriver.java
  type JdbcDriver (line 25) | public enum JdbcDriver {
    method JdbcDriver (line 40) | JdbcDriver(String driver) {
    method getDriver (line 44) | public String getDriver() {

FILE: fire-common/src/main/java/com/zto/fire/common/enu/JobType.java
  type JobType (line 25) | public enum JobType {
    method JobType (line 33) | JobType(String jobType) {
    method getJobTypeDesc (line 42) | public String getJobTypeDesc() {
    method isSpark (line 51) | public boolean isSpark() {
    method isFlink (line 60) | public boolean isFlink() {

FILE: fire-common/src/main/java/com/zto/fire/common/enu/Operation.java
  type Operation (line 11) | public enum Operation {
    method Operation (line 18) | Operation(int type) {
    method parse (line 24) | public static Operation parse(String operation) {

FILE: fire-common/src/main/java/com/zto/fire/common/enu/ThreadPoolType.java
  type ThreadPoolType (line 24) | public enum ThreadPoolType {

FILE: fire-common/src/main/java/com/zto/fire/common/enu/YarnState.java
  type YarnState (line 27) | public enum YarnState {
    method YarnState (line 41) | YarnState(String state) {
    method getState (line 45) | public String getState() {
    method getState (line 55) | public static YarnState getState(String state) {

FILE: fire-common/src/main/java/com/zto/fire/common/exception/FireException.java
  class FireException (line 26) | public class FireException extends Exception {
    method FireException (line 28) | public FireException() {
    method FireException (line 32) | public FireException(String message) {
    method FireException (line 36) | public FireException(String message, Throwable cause) {

FILE: fire-common/src/main/java/com/zto/fire/common/exception/FireFlinkException.java
  class FireFlinkException (line 26) | public class FireFlinkException extends FireException {
    method FireFlinkException (line 28) | public FireFlinkException() {
    method FireFlinkException (line 32) | public FireFlinkException(String message) {
    method FireFlinkException (line 36) | public FireFlinkException(String message, Throwable cause) {

FILE: fire-common/src/main/java/com/zto/fire/common/exception/FireSparkException.java
  class FireSparkException (line 26) | public class FireSparkException extends FireException {
    method FireSparkException (line 28) | public FireSparkException() {
    method FireSparkException (line 32) | public FireSparkException(String message) {
    method FireSparkException (line 36) | public FireSparkException(String message, Throwable cause) {

FILE: fire-common/src/main/java/com/zto/fire/common/util/EncryptUtils.java
  class EncryptUtils (line 38) | public class EncryptUtils {
    method EncryptUtils (line 42) | private EncryptUtils() {}
    method base64Decrypt (line 47) | public static String base64Decrypt(String message) {
    method base64Encrypt (line 60) | public static String base64Encrypt(String message) {
    method md5Encrypt (line 73) | public static String md5Encrypt(String message) {
    method shaEncrypt (line 99) | public static String shaEncrypt(String message, String key) {
    method checkAuth (line 121) | public static boolean checkAuth(String auth, String privateKey) {
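
`EncryptUtils` exposes Base64, MD5, and SHA helpers that return `String`. A minimal sketch of an MD5-to-hex helper in the same spirit as `md5Encrypt`, built on `java.security.MessageDigest`; the real method's exact contract (hex case, salting, null handling) is not visible in this index, so treat the details here as assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of an MD5 digest helper similar in spirit to
// EncryptUtils.md5Encrypt; output format is assumed to be lower-case hex.
public class Md5Sketch {
    public static String md5Hex(String message) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(message.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                hex.append(String.format("%02x", b)); // two hex digits per byte
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            // MD5 is a mandatory MessageDigest algorithm, so this is unreachable
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(md5Hex("abc")); // 900150983cd24fb0d6963f7d28e17f72
    }
}
```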

FILE: fire-common/src/main/java/com/zto/fire/common/util/FileUtils.java
  class FileUtils (line 30) | public class FileUtils {
    method FileUtils (line 31) | private FileUtils() {}
    method findFile (line 41) | public static File findFile(String path, String fileName, List<File> f...
    method resourceFileExists (line 69) | public static InputStream resourceFileExists(String fileName) {
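
`FileUtils.findFile(String path, String fileName, List<File> files)` suggests a recursive search that both accumulates matches and returns one. A sketch under those assumptions (the real match rules, ordering, and return choice are not shown in this index):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a recursive file search in the shape of
// FileUtils.findFile(path, fileName, files): walks the directory tree,
// collects exact name matches into the supplied list, returns the first.
public class FindFileSketch {
    public static File findFile(String path, String fileName, List<File> found) {
        File root = new File(path);
        File[] children = root.listFiles();
        if (children == null) return null; // not a directory, or unreadable
        for (File child : children) {
            if (child.isDirectory()) {
                findFile(child.getAbsolutePath(), fileName, found);
            } else if (child.getName().equals(fileName)) {
                found.add(child);
            }
        }
        return found.isEmpty() ? null : found.get(0);
    }

    public static void main(String[] args) {
        List<File> found = new ArrayList<>();
        // a path that does not exist yields null rather than an exception
        System.out.println(findFile("no-such-dir", "app.properties", found));
    }
}
```

Passing the accumulator list in lets callers distinguish "no match" from "several matches" without a second traversal.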

FILE: fire-common/src/main/java/com/zto/fire/common/util/FindClassUtils.java
  class FindClassUtils (line 42) | public class FindClassUtils {
    method FindClassUtils (line 50) | private FindClassUtils() {
    method listPackageClasses (line 56) | public static List<Class<? extends Serializable>> listPackageClasses(S...
    method findClassLocal (line 81) | private static void findClassLocal(final String packName, final List<C...
    method findClassJar (line 115) | private static void findClassJar(final String packName, final List<Cla...
    method isJar (line 164) | public static boolean isJar() {
    method findFileInJar (line 175) | public static String findFileInJar(String fileName) {

FILE: fire-common/src/main/java/com/zto/fire/common/util/HttpClientUtils.java
  class HttpClientUtils (line 35) | public class HttpClientUtils {
    method HttpClientUtils (line 40) | private HttpClientUtils() {
    method setHeaders (line 49) | private static void setHeaders(HttpMethodBase method, Header... header...
    method responseBody (line 62) | private static String responseBody(HttpMethodBase method) throws IOExc...
    method doGet (line 82) | public static String doGet(String url, Header... headers) throws IOExc...
    method doPost (line 112) | public static String doPost(String url, String json, Header... headers...
    method doPut (line 140) | public static String doPut(String url, String json, Header... headers)...
    method doGetIgnore (line 170) | public static String doGetIgnore(String url, Header... headers) {
    method doPostIgnore (line 186) | public static String doPostIgnore(String url, String json, Header... h...
    method doPutIgnore (line 202) | public static String doPutIgnore(String url, String json, Header... he...

FILE: fire-common/src/main/java/com/zto/fire/common/util/IOUtils.java
  class IOUtils (line 30) | public class IOUtils {
    method IOUtils (line 33) | private IOUtils() {}
    method close (line 38) | public static void close(Closeable... closeables) {
    method close (line 55) | public static void close(Process... process) {
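
The two varargs `close` overloads point at a null-safe, best-effort cleanup helper. A minimal sketch of the `Closeable...` variant, assuming each resource is closed independently so one failure does not skip the rest (the real method's logging and error handling are not shown here):

```java
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch of a null-safe varargs close helper in the style of
// IOUtils.close(Closeable...).
public class CloseSketch {
    public static void closeQuietly(Closeable... closeables) {
        if (closeables == null) return;
        for (Closeable c : closeables) {
            if (c == null) continue; // tolerate null slots
            try {
                c.close();
            } catch (IOException e) {
                // swallow: best-effort cleanup; a real impl would log this
            }
        }
    }

    public static void main(String[] args) {
        ByteArrayInputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3});
        closeQuietly(in, null); // null entries are skipped, no exception
        System.out.println("closed");
    }
}
```

The varargs shape keeps `finally` blocks to a single line: `closeQuietly(in, out, channel);`.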

FILE: fire-common/src/main/java/com/zto/fire/common/util/MathUtils.java
  class MathUtils (line 27) | public class MathUtils {
    method MathUtils (line 29) | private MathUtils() {}
    method percent (line 39) | public static double percent(long molecule, long denominator, int scal...
    method doubleScale (line 53) | public static double doubleScale(double data, int scale) {
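
`MathUtils.percent(long molecule, long denominator, int scale)` reads as a scaled percentage helper ("molecule" here is the numerator). A sketch assuming it returns `numerator / denominator * 100` rounded to `scale` decimal places; the actual rounding mode and zero-denominator behavior are not visible in this index:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical sketch with the same shape as MathUtils.percent;
// HALF_UP rounding and the zero guard are assumptions.
public class PercentSketch {
    public static double percent(long molecule, long denominator, int scale) {
        if (denominator == 0) return 0.0; // assumed guard against division by zero
        return BigDecimal.valueOf(molecule)
                .multiply(BigDecimal.valueOf(100))
                .divide(BigDecimal.valueOf(denominator), scale, RoundingMode.HALF_UP)
                .doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(percent(1, 3, 2)); // 33.33
    }
}
```

Doing the division in `BigDecimal` avoids the classic pitfall of `long / long` truncating to 0 before the multiply.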

FILE: fire-common/src/main/java/com/zto/fire/common/util/OSUtils.java
  class OSUtils (line 37) | public class OSUtils {
    method OSUtils (line 45) | private OSUtils() {
    method getHostLANAddress (line 51) | public static InetAddress getHostLANAddress() {
    method getIp (line 88) | public static String getIp() {
    method getHostName (line 103) | public static String getHostName() {
    method getRundomPort (line 117) | public static int getRundomPort() {
    method getPid (line 133) | public static String getPid() {
    method isLinux (line 143) | public static boolean isLinux() {
    method isWindows (line 150) | public static boolean isWindows() {
    method isLocal (line 159) | public static boolean isLocal() {
    method isMac (line 166) | public static boolean isMac() {

FILE: fire-common/src/main/java/com/zto/fire/common/util/ProcessUtil.java
  class ProcessUtil (line 33) | public class ProcessUtil {
    method ProcessUtil (line 36) | private ProcessUtil() {}
    method executeCmds (line 44) | public static void executeCmds(String... commands) {
    method executeCmdForLine (line 57) | public static String executeCmdForLine(String cmd) {

FILE: fire-common/src/main/java/com/zto/fire/common/util/ReflectionUtils.java
  class ReflectionUtils (line 38) | public class ReflectionUtils {
    method ReflectionUtils (line 43) | private ReflectionUtils() {
    method setAccessible (line 46) | public static void setAccessible(Field field) {
    method setAccessible (line 52) | public static void setAccessible(Method method) {
    method forName (line 61) | public static Class<?> forName(String className) {
    method containsField (line 73) | public static boolean containsField(Class<?> clazz, String fieldName) {
    method getFields (line 81) | private static Map<String, Field> getFields(Class<?> clazz) {
    method getDeclaredFields (line 96) | private static Map<String, Field> getDeclaredFields(Class<?> clazz) {
    method getAllFields (line 112) | public static Map<String, Field> getAllFields(Class<?> clazz) {
    method getFieldByName (line 126) | public static Field getFieldByName(Class<?> clazz, String fieldName) {
    method getAllMethods (line 133) | public static Map<String, Method> getAllMethods(Class<?> clazz) {
    method getMethodByName (line 151) | public static Method getMethodByName(Class<?> clazz, String methodName) {
    method getMethodByName (line 162) | public static Method getMethodByName(String className, String methodNa...
    method containsMethod (line 170) | public static boolean containsMethod(Class<?> clazz, String methodName) {
    method getMethods (line 178) | private static Map<String, Method> getMethods(Class<?> clazz) {
    method getDeclaredMethods (line 193) | private static Map<String, Method> getDeclaredMethods(Class<?> clazz) {
    method getFieldType (line 209) | public static Class<?> getFieldType(Class<?> clazz, String fieldName) {
    method getAnnotation (line 235) | private static <T extends Annotation> Annotation getAnnotation(Class<?...
    method getAnnotations (line 260) | private static List<Annotation> getAnnotations(Class<?> clazz, Element...
    method getFieldAnnotation (line 282) | public static <T extends Annotation> Annotation getFieldAnnotation(Cla...
    method getFieldAnnotations (line 289) | public static List<Annotation> getFieldAnnotations(Class<?> clazz, Str...
    method getMethodAnnotation (line 296) | public static <T extends Annotation> Annotation getMethodAnnotation(Cl...
    method getMethodAnnotations (line 303) | public static List<Annotation> getMethodAnnotations(Class<?> clazz, St...
    method getClassAnnotation (line 310) | public static <T extends Annotation> Annotation getClassAnnotation(Cla...
    method getClassAnnotations (line 317) | public static List<Annotation> getClassAnnotations(Class<?> clazz) {
    method invokeAnnoMethod (line 330) | public static void invokeAnnoMethod(Object target, Class<? extends Ann...
    method invokeStepAnnoMethod (line 354) | public static void invokeStepAnnoMethod(Object target, Class<? extends...
    method getAnnoFieldValue (line 412) | public static Object getAnnoFieldValue(Annotation anno, String methodN...
    method getClassInJar (line 426) | public static String getClassInJar(Class clazz) {

FILE: fire-common/src/main/java/com/zto/fire/common/util/StringsUtils.java
  class StringsUtils (line 32) | public class StringsUtils {
    method StringsUtils (line 33) | private StringsUtils() {
    method hrefTag (line 42) | public static String hrefTag(String str) {
    method brTag (line 52) | public static String brTag(String str) {
    method append (line 62) | public static String append(String... strs) {
    method replace (line 80) | public static String replace(String str, Map<String, String> map) {
    method toByteArray (line 95) | public static byte[] toByteArray(String hexString) {
    method toHexString (line 117) | public static String toHexString(byte[] byteArray) {
    method substring (line 138) | public static String substring(String str, int start, int end) {
    method isInt (line 155) | public static boolean isInt(String str) {
    method isLong (line 169) | public static boolean isLong(String str) {
    method isBoolean (line 187) | public static boolean isBoolean(String str) {
    method isFloat (line 196) | public static boolean isFloat(String str) {
    method isDouble (line 214) | public static boolean isDouble(String str) {
    method parseString (line 231) | public static Object parseString(String str) {
    method isNumeric (line 257) | public static boolean isNumeric(String str) {
    method randomSplit (line 273) | public static String randomSplit(String strs, String delimiter) {
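The `toByteArray`/`toHexString` pair listed above implies a hex codec. A minimal self-contained sketch of that round trip (independent of the fire implementation; the `HexCodec` class name is hypothetical):

```java
// Hypothetical stand-in for the hex helpers listed above; not the fire source.
public class HexCodec {
    private static final char[] DIGITS = "0123456789abcdef".toCharArray();

    // Encode each byte as two lowercase hex digits.
    public static String toHexString(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(DIGITS[(b >> 4) & 0x0F]).append(DIGITS[b & 0x0F]);
        }
        return sb.toString();
    }

    // Decode a hex string back into bytes; expects an even-length input.
    public static byte[] toByteArray(String hex) {
        if (hex.length() % 2 != 0) {
            throw new IllegalArgumentException("hex string must have even length");
        }
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}
```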

FILE: fire-common/src/main/java/com/zto/fire/common/util/UnitFormatUtils.java
  class UnitFormatUtils (line 31) | public class UnitFormatUtils {
    type DateUnitEnum (line 36) | public enum DateUnitEnum {
    type TimeUnitEnum (line 48) | public enum TimeUnitEnum {
    method getIndex (line 64) | private static <T> int getIndex(List<T> orderList, T unit) {
    method init (line 76) | private static List<BigDecimal> init(int ... metrics) {
    method readable (line 92) | public static String readable(Number data, DateUnitEnum unit) {
    method format (line 120) | public static String format(Number data, DateUnitEnum fromUnit, DateUn...
    method readable (line 147) | public static String readable(Number data, TimeUnitEnum unit) {
    method format (line 175) | public static String format(Number data, TimeUnitEnum fromUnit, TimeUn...
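The `readable` methods above appear to walk an ordered unit list to render a raw value in a human-friendly unit. A generic sketch of that technique (the unit names, the 1024 step, and the `ReadableSize` class are assumptions, not taken from the fire source):

```java
// Illustrative only: promote a raw count to the largest unit it fills,
// analogous in spirit to UnitFormatUtils.readable.
public class ReadableSize {
    private static final String[] UNITS = {"B", "KB", "MB", "GB", "TB"};

    public static String readable(double value) {
        int i = 0;
        // Step up one unit while the value still spans a full 1024 step.
        while (value >= 1024 && i < UNITS.length - 1) {
            value /= 1024;
            i++;
        }
        return String.format("%.2f%s", value, UNITS[i]);
    }
}
```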

FILE: fire-common/src/main/java/com/zto/fire/common/util/YarnUtils.java
  class YarnUtils (line 27) | public class YarnUtils {
    method YarnUtils (line 29) | private YarnUtils() {}
    method getAppId (line 35) | public static String getAppId(String log) {

FILE: fire-connectors/base-connectors/fire-hbase/src/main/scala/com/zto/fire/hbase/bean/HBaseBaseBean.java
  class HBaseBaseBean (line 28) | public abstract class HBaseBaseBean<T> implements Serializable {
    method buildRowKey (line 44) | public abstract T buildRowKey();
    method getRowKey (line 46) | public String getRowKey() {
    method setRowKey (line 50) | public void setRowKey(String rowKey) {

FILE: fire-connectors/base-connectors/fire-hbase/src/main/scala/com/zto/fire/hbase/bean/MultiVersionsBean.java
  class MultiVersionsBean (line 36) | public class MultiVersionsBean extends HBaseBaseBean<MultiVersionsBean> {
    method getMultiFields (line 54) | public String getMultiFields() {
    method setMultiFields (line 58) | public void setMultiFields(String multiFields) {
    method getTarget (line 62) | public HBaseBaseBean getTarget() {
    method setTarget (line 66) | public void setTarget(HBaseBaseBean<?> target) {
    method MultiVersionsBean (line 70) | public MultiVersionsBean(HBaseBaseBean<?> target) {
    method MultiVersionsBean (line 75) | public MultiVersionsBean() {
    method buildRowKey (line 79) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/ClickHouseDynamicTableFactory.java
  class ClickHouseDynamicTableFactory (line 63) | public class ClickHouseDynamicTableFactory
    method ClickHouseDynamicTableFactory (line 66) | public ClickHouseDynamicTableFactory() {}
    method createDynamicTableSink (line 68) | @Override
    method createDynamicTableSource (line 81) | @Override
    method factoryIdentifier (line 99) | @Override
    method requiredOptions (line 104) | @Override
    method optionalOptions (line 112) | @Override
    method validateConfigOptions (line 134) | private void validateConfigOptions(ReadableConfig config) {
    method getDmlOptions (line 156) | private ClickHouseDmlOptions getDmlOptions(ReadableConfig config) {
    method getReadOptions (line 174) | private ClickHouseReadOptions getReadOptions(ReadableConfig config) {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/ClickHouseDynamicTableSink.java
  class ClickHouseDynamicTableSink (line 42) | public class ClickHouseDynamicTableSink implements DynamicTableSink, Sup...
    method ClickHouseDynamicTableSink (line 54) | public ClickHouseDynamicTableSink(
    method getChangelogMode (line 61) | @Override
    method validatePrimaryKey (line 71) | private void validatePrimaryKey(ChangelogMode requestedMode) {
    method getSinkRuntimeProvider (line 78) | @Override
    method applyStaticPartition (line 102) | @Override
    method requiresPartitionGrouping (line 112) | @Override
    method copy (line 118) | @Override
    method asSummaryString (line 127) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/ClickHouseDynamicTableSource.java
  class ClickHouseDynamicTableSource (line 40) | public class ClickHouseDynamicTableSource
    method ClickHouseDynamicTableSource (line 58) | public ClickHouseDynamicTableSource(
    method getChangelogMode (line 69) | @Override
    method getScanRuntimeProvider (line 74) | @Override
    method copy (line 90) | @Override
    method asSummaryString (line 100) | @Override
    method applyFilters (line 105) | @Override
    method applyLimit (line 111) | @Override
    method supportsNestedProjection (line 116) | @Override
    method applyProjection (line 121) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/catalog/ClickHouseCatalog.java
  class ClickHouseCatalog (line 84) | public class ClickHouseCatalog extends AbstractCatalog {
    method ClickHouseCatalog (line 102) | public ClickHouseCatalog(String catalogName, Map<String, String> prope...
    method ClickHouseCatalog (line 112) | public ClickHouseCatalog(
    method ClickHouseCatalog (line 121) | public ClickHouseCatalog(
    method open (line 146) | @Override
    method close (line 164) | @Override
    method getFactory (line 174) | @Override
    method listDatabases (line 181) | @Override
    method getDatabase (line 200) | @Override
    method databaseExists (line 210) | @Override
    method createDatabase (line 217) | @Override
    method dropDatabase (line 223) | @Override
    method alterDatabase (line 229) | @Override
    method listTables (line 237) | @Override
    method listViews (line 266) | @Override
    method getTable (line 272) | @Override
    method createTableSchema (line 311) | private synchronized TableSchema createTableSchema(String databaseName...
    method getPrimaryKeys (line 352) | private List<String> getPrimaryKeys(String databaseName, String tableN...
    method getPartitionKeys (line 378) | private List<String> getPartitionKeys(String databaseName, String tabl...
    method tableExists (line 400) | @Override
    method dropTable (line 410) | @Override
    method renameTable (line 416) | @Override
    method createTable (line 422) | @Override
    method alterTable (line 428) | @Override
    method listPartitions (line 437) | @Override
    method listPartitions (line 443) | @Override
    method listPartitionsByFilter (line 451) | @Override
    method getPartition (line 458) | @Override
    method partitionExists (line 464) | @Override
    method createPartition (line 470) | @Override
    method dropPartition (line 482) | @Override
    method alterPartition (line 489) | @Override
    method listFunctions (line 501) | @Override
    method getFunction (line 507) | @Override
    method functionExists (line 513) | @Override
    method createFunction (line 518) | @Override
    method alterFunction (line 525) | @Override
    method dropFunction (line 532) | @Override
    method getTableStatistics (line 540) | @Override
    method getTableColumnStatistics (line 546) | @Override
    method getPartitionStatistics (line 552) | @Override
    method getPartitionColumnStatistics (line 559) | @Override
    method alterTableStatistics (line 566) | @Override
    method alterTableColumnStatistics (line 573) | @Override
    method alterPartitionStatistics (line 582) | @Override
    method alterPartitionColumnStatistics (line 592) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/catalog/ClickHouseCatalogFactory.java
  class ClickHouseCatalogFactory (line 49) | public class ClickHouseCatalogFactory implements CatalogFactory {
    method factoryIdentifier (line 51) | @Override
    method requiredOptions (line 56) | @Override
    method optionalOptions (line 65) | @Override
    method createCatalog (line 87) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/config/ClickHouseConfig.java
  class ClickHouseConfig (line 21) | public class ClickHouseConfig {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/config/ClickHouseConfigOptions.java
  class ClickHouseConfigOptions (line 27) | public class ClickHouseConfigOptions {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/AbstractClickHouseInputFormat.java
  class AbstractClickHouseInputFormat (line 49) | public abstract class AbstractClickHouseInputFormat extends RichInputFor...
    method AbstractClickHouseInputFormat (line 64) | protected AbstractClickHouseInputFormat(
    method configure (line 79) | @Override
    method getStatistics (line 82) | @Override
    method getProducedType (line 87) | @Override
    method getInputSplitAssigner (line 92) | @Override
    method createGenericInputSplits (line 97) | protected InputSplit[] createGenericInputSplits(int splitNum) {
    method getQuery (line 105) | protected String getQuery(String table, String database) {
    class Builder (line 135) | public static class Builder {
      method withOptions (line 161) | public Builder withOptions(ClickHouseReadOptions readOptions) {
      method withConnectionProperties (line 166) | public Builder withConnectionProperties(Properties connectionPropert...
      method withFieldNames (line 171) | public Builder withFieldNames(String[] fieldNames) {
      method withFieldTypes (line 176) | public Builder withFieldTypes(DataType[] fieldTypes) {
      method withRowDataTypeInfo (line 181) | public Builder withRowDataTypeInfo(TypeInformation<RowData> rowDataT...
      method withFilterClause (line 186) | public Builder withFilterClause(String filterClause) {
      method withLimit (line 191) | public Builder withLimit(long limit) {
      method build (line 196) | public AbstractClickHouseInputFormat build() {
      method initShardInfo (line 220) | private int[] initShardInfo() {
      method initPartitionInfo (line 261) | private void initPartitionInfo(int[] shardIds) {
      method createShardInputFormat (line 286) | private AbstractClickHouseInputFormat createShardInputFormat(Logical...
      method createBatchOutputFormat (line 302) | private AbstractClickHouseInputFormat createBatchOutputFormat(Logica...

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/AbstractClickHouseOutputFormat.java
  class AbstractClickHouseOutputFormat (line 47) | public abstract class AbstractClickHouseOutputFormat extends RichOutputF...
    method AbstractClickHouseOutputFormat (line 62) | public AbstractClickHouseOutputFormat() {}
    method configure (line 64) | @Override
    method scheduledFlush (line 67) | public void scheduledFlush(long intervalMillis, String executorName) {
    method checkBeforeFlush (line 88) | public void checkBeforeFlush(final ClickHouseExecutor executor) throws...
    method close (line 97) | @Override
    method checkFlushException (line 118) | protected void checkFlushException() {
    method closeOutputFormat (line 124) | protected abstract void closeOutputFormat();
    class Builder (line 127) | public static class Builder {
      method Builder (line 142) | public Builder() {}
      method withOptions (line 144) | public AbstractClickHouseOutputFormat.Builder withOptions(ClickHouse...
      method withFieldDataTypes (line 149) | public AbstractClickHouseOutputFormat.Builder withFieldDataTypes(
      method withFieldNames (line 155) | public AbstractClickHouseOutputFormat.Builder withFieldNames(String[...
      method withPrimaryKey (line 160) | public AbstractClickHouseOutputFormat.Builder withPrimaryKey(UniqueC...
      method withPartitionKey (line 165) | public AbstractClickHouseOutputFormat.Builder withPartitionKey(List<...
      method build (line 170) | public AbstractClickHouseOutputFormat build() {
      method createBatchOutputFormat (line 188) | private ClickHouseBatchOutputFormat createBatchOutputFormat(LogicalT...
      method createShardOutputFormat (line 203) | private ClickHouseShardOutputFormat createShardOutputFormat(LogicalT...
      method listToStringArray (line 245) | private String[] listToStringArray(List<String> list) {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseBatchInputFormat.java
  class ClickHouseBatchInputFormat (line 37) | public class ClickHouseBatchInputFormat extends AbstractClickHouseInputF...
    method ClickHouseBatchInputFormat (line 51) | public ClickHouseBatchInputFormat(
    method openInputFormat (line 67) | @Override
    method closeInputFormat (line 78) | @Override
    method open (line 95) | @Override
    method close (line 112) | @Override
    method reachedEnd (line 123) | @Override
    method nextRecord (line 128) | @Override
    method createInputSplits (line 144) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseBatchOutputFormat.java
  class ClickHouseBatchOutputFormat (line 33) | public class ClickHouseBatchOutputFormat extends AbstractClickHouseOutpu...
    method ClickHouseBatchOutputFormat (line 53) | protected ClickHouseBatchOutputFormat(
    method open (line 68) | @Override
    method writeRecord (line 92) | @Override
    method flush (line 107) | @Override
    method closeOutputFormat (line 115) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseShardInputFormat.java
  class ClickHouseShardInputFormat (line 43) | @Experimental
    method ClickHouseShardInputFormat (line 65) | public ClickHouseShardInputFormat(
    method configure (line 87) | @Override
    method openInputFormat (line 94) | @Override
    method closeInputFormat (line 97) | @Override
    method open (line 104) | @Override
    method close (line 140) | @Override
    method reachedEnd (line 166) | @Override
    method nextRecord (line 172) | @Override
    method nextValidResultSet (line 189) | private boolean nextValidResultSet() {
    method createInputSplits (line 208) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseShardOutputFormat.java
  class ClickHouseShardOutputFormat (line 42) | public class ClickHouseShardOutputFormat extends AbstractClickHouseOutpu...
    method ClickHouseShardOutputFormat (line 66) | protected ClickHouseShardOutputFormat(
    method open (line 85) | @Override
    method writeRecord (line 131) | @Override
    method writeRecordToOneExecutor (line 155) | private void writeRecordToOneExecutor(RowData record) throws IOExcepti...
    method flush (line 168) | @Override
    method flush (line 175) | private synchronized void flush(int index) throws IOException {
    method closeOutputFormat (line 182) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseStatementFactory.java
  class ClickHouseStatementFactory (line 28) | public class ClickHouseStatementFactory {
    method ClickHouseStatementFactory (line 32) | private ClickHouseStatementFactory() {}
    method getSelectStatement (line 34) | public static String getSelectStatement(
    method getInsertIntoStatement (line 44) | public static String getInsertIntoStatement(String tableName, String[]...
    method getUpdateStatement (line 61) | public static String getUpdateStatement(
    method getDeleteStatement (line 94) | public static String getDeleteStatement(
    method fromTableClause (line 114) | private static String fromTableClause(String tableName, String databas...
    method quoteIdentifier (line 122) | private static String quoteIdentifier(String identifier) {
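The statement-factory methods above follow a common pattern: quote each identifier and join `?` placeholders into a DML string. A simplified, self-contained sketch of that pattern (the backtick quoting and the exact SQL shape here are assumptions, not the connector's output):

```java
// Sketch of identifier quoting + placeholder joining; not the connector's exact SQL.
public class InsertSql {
    static String quoteIdentifier(String identifier) {
        return "`" + identifier + "`";
    }

    static String getInsertIntoStatement(String tableName, String[] fieldNames) {
        StringBuilder columns = new StringBuilder();
        StringBuilder placeholders = new StringBuilder();
        for (int i = 0; i < fieldNames.length; i++) {
            if (i > 0) {
                columns.append(", ");
                placeholders.append(", ");
            }
            columns.append(quoteIdentifier(fieldNames[i]));
            placeholders.append("?");
        }
        return "INSERT INTO " + quoteIdentifier(tableName)
                + "(" + columns + ") VALUES (" + placeholders + ")";
    }
}
```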

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/common/DistributedEngineFullSchema.java
  class DistributedEngineFullSchema (line 27) | public class DistributedEngineFullSchema implements Serializable {
    method of (line 39) | public static DistributedEngineFullSchema of(String cluster, String da...
    method of (line 43) | public static DistributedEngineFullSchema of(
    method DistributedEngineFullSchema (line 48) | private DistributedEngineFullSchema(
    method DistributedEngineFullSchema (line 63) | private DistributedEngineFullSchema(String cluster, String database, S...
    method getCluster (line 67) | public String getCluster() {
    method getDatabase (line 71) | public String getDatabase() {
    method getTable (line 75) | public String getTable() {
    method getShardingKey (line 79) | public String getShardingKey() {
    method getPolicyName (line 83) | public String getPolicyName() {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/connection/ClickHouseConnectionProvider.java
  class ClickHouseConnectionProvider (line 46) | public class ClickHouseConnectionProvider implements Serializable {
    method ClickHouseConnectionProvider (line 72) | public ClickHouseConnectionProvider(ClickHouseConnectionOptions option...
    method ClickHouseConnectionProvider (line 76) | public ClickHouseConnectionProvider(
    method getOrCreateConnection (line 82) | public synchronized ClickHouseConnection getOrCreateConnection() throw...
    method createShardConnections (line 89) | public synchronized List<ClickHouseConnection> createShardConnections(
    method createAndStoreShardConnection (line 106) | public synchronized ClickHouseConnection createAndStoreShardConnection(
    method getShardUrls (line 117) | public List<String> getShardUrls(String remoteCluster) throws SQLExcep...
    method createConnection (line 135) | private ClickHouseConnection createConnection(String url, String datab...
    method closeConnections (line 150) | public void closeConnections() {
    method getActualHttpPort (line 174) | private int getActualHttpPort(String host, int port) throws SQLExcepti...

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/converter/ClickHouseConverterUtils.java
  class ClickHouseConverterUtils (line 47) | public class ClickHouseConverterUtils {
    method toExternal (line 51) | public static Object toExternal(Object value, LogicalType type) {
    method toInternal (line 120) | public static Object toInternal(Object value, LogicalType type) throws...

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/converter/ClickHouseRowConverter.java
  class ClickHouseRowConverter (line 39) | public class ClickHouseRowConverter implements Serializable {
    method ClickHouseRowConverter (line 49) | public ClickHouseRowConverter(RowType rowType) {
    method toInternal (line 62) | public RowData toInternal(ResultSet resultSet) throws SQLException {
    method toExternal (line 75) | public void toExternal(RowData rowData, ClickHousePreparedStatement st...
    method createToInternalConverter (line 86) | protected ClickHouseRowConverter.DeserializationConverter createToInte...
    method createToExternalConverter (line 137) | protected ClickHouseRowConverter.SerializationConverter createToExtern...
    type SerializationConverter (line 222) | @FunctionalInterface
      method serialize (line 228) | void serialize(RowData rowData, int index, ClickHousePreparedStateme...
    type DeserializationConverter (line 232) | @FunctionalInterface
      method deserialize (line 237) | Object deserialize(Object field) throws SQLException;

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/executor/ClickHouseBatchExecutor.java
  class ClickHouseBatchExecutor (line 23) | public class ClickHouseBatchExecutor implements ClickHouseExecutor {
    method ClickHouseBatchExecutor (line 39) | public ClickHouseBatchExecutor(
    method prepareStatement (line 46) | @Override
    method prepareStatement (line 51) | @Override
    method setRuntimeContext (line 58) | @Override
    method addToBatch (line 61) | @Override
    method executeBatch (line 80) | @Override
    method closeStatement (line 85) | @Override
    method toString (line 98) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/executor/ClickHouseExecutor.java
  type ClickHouseExecutor (line 33) | public interface ClickHouseExecutor extends Serializable {
    method prepareStatement (line 37) | void prepareStatement(ClickHouseConnection connection) throws SQLExcep...
    method prepareStatement (line 39) | void prepareStatement(ClickHouseConnectionProvider connectionProvider)...
    method setRuntimeContext (line 41) | void setRuntimeContext(RuntimeContext context);
    method addToBatch (line 43) | void addToBatch(RowData rowData) throws SQLException;
    method executeBatch (line 45) | void executeBatch() throws SQLException;
    method closeStatement (line 47) | void closeStatement();
    method attemptExecuteBatch (line 49) | default void attemptExecuteBatch(ClickHousePreparedStatement stmt, int...
    method createClickHouseExecutor (line 71) | static ClickHouseExecutor createClickHouseExecutor(
    method createBatchExecutor (line 95) | static ClickHouseBatchExecutor createBatchExecutor(
    method createUpsertExecutor (line 105) | static ClickHouseUpsertExecutor createUpsertExecutor(
    method createExtractor (line 156) | static Function<RowData, RowData> createExtractor(LogicalType[] logica...
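The `attemptExecuteBatch(stmt, maxRetries)` default method above suggests a bounded-retry loop around batch execution. A generic sketch of that control flow (the `Action` interface and `Retry` class are hypothetical; the real method operates on a `ClickHousePreparedStatement`):

```java
// Generic bounded-retry loop in the spirit of attemptExecuteBatch:
// run an action up to maxRetries times, rethrowing the last failure.
public class Retry {
    interface Action {
        void run() throws Exception;
    }

    // Returns the number of attempts used on success.
    static int attempt(Action action, int maxRetries) throws Exception {
        Exception last = null;
        for (int i = 1; i <= maxRetries; i++) {
            try {
                action.run();
                return i;
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }
}
```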

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/executor/ClickHouseUpsertExecutor.java
  class ClickHouseUpsertExecutor (line 25) | public class ClickHouseUpsertExecutor implements ClickHouseExecutor {
    method ClickHouseUpsertExecutor (line 57) | public ClickHouseUpsertExecutor(
    method prepareStatement (line 78) | @Override
    method prepareStatement (line 85) | @Override
    method setRuntimeContext (line 92) | @Override
    method addToBatch (line 95) | @Override
    method executeBatch (line 120) | @Override
    method closeStatement (line 130) | @Override
    method toString (line 144) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/options/ClickHouseConnectionOptions.java
  class ClickHouseConnectionOptions (line 26) | public class ClickHouseConnectionOptions implements Serializable {
    method ClickHouseConnectionOptions (line 40) | protected ClickHouseConnectionOptions(
    method getUrl (line 53) | public String getUrl() {
    method getUsername (line 57) | public Optional<String> getUsername() {
    method getPassword (line 61) | public Optional<String> getPassword() {
    method getDatabaseName (line 65) | public String getDatabaseName() {
    method getTableName (line 69) | public String getTableName() {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/options/ClickHouseDmlOptions.java
  class ClickHouseDmlOptions (line 25) | public class ClickHouseDmlOptions extends ClickHouseConnectionOptions {
    method ClickHouseDmlOptions (line 43) | private ClickHouseDmlOptions(
    method getBatchSize (line 66) | public int getBatchSize() {
    method getFlushInterval (line 70) | public Duration getFlushInterval() {
    method getMaxRetries (line 74) | public int getMaxRetries() {
    method isUseLocal (line 78) | public boolean isUseLocal() {
    method getPartitionStrategy (line 82) | public String getPartitionStrategy() {
    method getPartitionKey (line 86) | public String getPartitionKey() {
    method getIgnoreDelete (line 90) | public boolean getIgnoreDelete() {
    class Builder (line 95) | public static class Builder {
      method Builder (line 110) | public Builder() {}
      method withUrl (line 112) | public ClickHouseDmlOptions.Builder withUrl(String url) {
      method withUsername (line 117) | public ClickHouseDmlOptions.Builder withUsername(String username) {
      method withPassword (line 122) | public ClickHouseDmlOptions.Builder withPassword(String password) {
      method withDatabaseName (line 127) | public ClickHouseDmlOptions.Builder withDatabaseName(String database...
      method withTableName (line 132) | public ClickHouseDmlOptions.Builder withTableName(String tableName) {
      method withBatchSize (line 137) | public ClickHouseDmlOptions.Builder withBatchSize(int batchSize) {
      method withFlushInterval (line 142) | public ClickHouseDmlOptions.Builder withFlushInterval(Duration flush...
      method withMaxRetries (line 147) | public ClickHouseDmlOptions.Builder withMaxRetries(int maxRetries) {
      method withWriteLocal (line 152) | public ClickHouseDmlOptions.Builder withWriteLocal(Boolean writeLoca...
      method withUseLocal (line 157) | public ClickHouseDmlOptions.Builder withUseLocal(Boolean useLocal) {
      method withPartitionStrategy (line 162) | public ClickHouseDmlOptions.Builder withPartitionStrategy(String par...
      method withPartitionKey (line 167) | public ClickHouseDmlOptions.Builder withPartitionKey(String partitio...
      method withIgnoreDelete (line 172) | public ClickHouseDmlOptions.Builder withIgnoreDelete(boolean ignoreD...
      method build (line 177) | public ClickHouseDmlOptions build() {
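`ClickHouseDmlOptions.Builder` above is a standard fluent builder: each `withXxx` setter returns the builder, and `build()` produces the immutable options object. A minimal self-contained sketch of that shape (the `Options` fields, the default value, and the validation here are illustrative, not from the source):

```java
// Minimal fluent-builder sketch mirroring the Builder shape listed above.
public class Options {
    final String url;
    final int batchSize;

    private Options(String url, int batchSize) {
        this.url = url;
        this.batchSize = batchSize;
    }

    public static class Builder {
        private String url;
        private int batchSize = 1000; // assumed default, not from the source

        public Builder withUrl(String url) {
            this.url = url;
            return this; // returning this enables method chaining
        }

        public Builder withBatchSize(int batchSize) {
            this.batchSize = batchSize;
            return this;
        }

        public Options build() {
            if (url == null) throw new IllegalStateException("url is required");
            return new Options(url, batchSize);
        }
    }
}
```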

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/options/ClickHouseReadOptions.java
  class ClickHouseReadOptions (line 23) | public class ClickHouseReadOptions extends ClickHouseConnectionOptions {
    method ClickHouseReadOptions (line 34) | private ClickHouseReadOptions(
    method isUseLocal (line 53) | public boolean isUseLocal() {
    method getPartitionColumn (line 57) | public String getPartitionColumn() {
    method getPartitionNum (line 61) | public Integer getPartitionNum() {
    method getPartitionLowerBound (line 65) | public Long getPartitionLowerBound() {
    method getPartitionUpperBound (line 69) | public Long getPartitionUpperBound() {
    class Builder (line 74) | public static class Builder {
      method withUrl (line 86) | public ClickHouseReadOptions.Builder withUrl(String url) {
      method withUsername (line 91) | public ClickHouseReadOptions.Builder withUsername(String username) {
      method withPassword (line 96) | public ClickHouseReadOptions.Builder withPassword(String password) {
      method withDatabaseName (line 101) | public ClickHouseReadOptions.Builder withDatabaseName(String databas...
      method withTableName (line 106) | public ClickHouseReadOptions.Builder withTableName(String tableName) {
      method withUseLocal (line 111) | public ClickHouseReadOptions.Builder withUseLocal(boolean useLocal) {
      method withPartitionColumn (line 116) | public ClickHouseReadOptions.Builder withPartitionColumn(String part...
      method withPartitionNum (line 121) | public ClickHouseReadOptions.Builder withPartitionNum(Integer partit...
      method withPartitionLowerBound (line 126) | public Builder withPartitionLowerBound(Long partitionLowerBound) {
      method withPartitionUpperBound (line 131) | public Builder withPartitionUpperBound(Long partitionUpperBound) {
      method build (line 136) | public ClickHouseReadOptions build() {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/partitioner/BalancedPartitioner.java
  class BalancedPartitioner (line 11) | public class BalancedPartitioner implements ClickHousePartitioner {
    method BalancedPartitioner (line 17) | public BalancedPartitioner() {}
    method select (line 19) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/partitioner/ClickHousePartitioner.java
  type ClickHousePartitioner (line 14) | public interface ClickHousePartitioner extends Serializable {
    method select (line 22) | int select(RowData record, int numShards);
    method createBalanced (line 24) | static ClickHousePartitioner createBalanced() {
    method createShuffle (line 28) | static ClickHousePartitioner createShuffle() {
    method createHash (line 32) | static ClickHousePartitioner createHash(FieldGetter getter) {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/partitioner/HashPartitioner.java
  class HashPartitioner (line 14) | public class HashPartitioner implements ClickHousePartitioner {
    method HashPartitioner (line 20) | public HashPartitioner(FieldGetter getter) {
    method select (line 24) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/partitioner/ShufflePartitioner.java
  class ShufflePartitioner (line 13) | public class ShufflePartitioner implements ClickHousePartitioner {
    method ShufflePartitioner (line 17) | public ShufflePartitioner() {}
    method select (line 19) | @Override
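The three partitioner classes above (`BalancedPartitioner`, `ShufflePartitioner`, `HashPartitioner`) name three shard-selection strategies. A self-contained sketch of what those strategies typically look like (one combined class here for brevity; the real code implements `select(RowData, int)` per class):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the three shard-selection strategies the partitioner classes suggest:
// round-robin (balanced), random (shuffle), and key-hash routing.
public class Partitioners {
    private int nextShard = 0;

    // Balanced: cycle through shards in order.
    public int selectBalanced(int numShards) {
        nextShard = (nextShard + 1) % numShards;
        return nextShard;
    }

    // Shuffle: pick a shard uniformly at random.
    public int selectShuffle(int numShards) {
        return ThreadLocalRandom.current().nextInt(numShards);
    }

    // Hash: route by a key so equal keys always land on the same shard.
    public int selectHash(Object key, int numShards) {
        return Math.floorMod(key.hashCode(), numShards);
    }
}
```

Hash routing is what keeps rows with the same sharding key co-located on one shard, which matters when the downstream table uses a sharding key for dedup or local joins.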

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseBatchBetweenParametersProvider.java
  class ClickHouseBatchBetweenParametersProvider (line 23) | public class ClickHouseBatchBetweenParametersProvider extends ClickHouse...
    method ClickHouseBatchBetweenParametersProvider (line 25) | public ClickHouseBatchBetweenParametersProvider(long minVal, long maxV...
    method ofBatchNum (line 29) | @Override
    method calculate (line 41) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseBetweenParametersProvider.java
  class ClickHouseBetweenParametersProvider (line 26) | public abstract class ClickHouseBetweenParametersProvider extends ClickH...
    method ClickHouseBetweenParametersProvider (line 34) | public ClickHouseBetweenParametersProvider(long minVal, long maxVal) {
    method getParameterClause (line 40) | @Override
    method divideParameterValues (line 45) | protected Serializable[][] divideParameterValues(int batchNum) {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseParametersProvider.java
  class ClickHouseParametersProvider (line 23) | public abstract class ClickHouseParametersProvider {
    method getParameterValues (line 30) | public Serializable[][] getParameterValues() {
    method getShardIdValues (line 35) | public Serializable[][] getShardIdValues() {
    method getParameterClause (line 39) | public abstract String getParameterClause();
    method ofBatchNum (line 41) | public abstract ClickHouseParametersProvider ofBatchNum(Integer batchN...
    method calculate (line 43) | public abstract ClickHouseParametersProvider calculate();
    method allocateShards (line 47) | protected int[] allocateShards(int minBatchSize, int minBatchNum, int ...
    method subShardIds (line 59) | protected Integer[] subShardIds(int start, int idNum, int[] shardIds) {
    class Builder (line 68) | public static class Builder {
      method setMinVal (line 80) | public Builder setMinVal(Long minVal) {
      method setMaxVal (line 85) | public Builder setMaxVal(Long maxVal) {
      method setBatchNum (line 90) | public Builder setBatchNum(Integer batchNum) {
      method setShardIds (line 95) | public Builder setShardIds(int[] shardIds) {
      method setUseLocal (line 100) | public Builder setUseLocal(boolean useLocal) {
      method build (line 105) | public ClickHouseParametersProvider build() {
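The parameters providers above split a numeric `[minVal, maxVal]` range into per-batch bounds so a source can issue parallel `BETWEEN` reads. A self-contained sketch of that division logic (the connector's exact rounding in `divideParameterValues` may differ; method and class names here are hypothetical):

```java
import java.util.Arrays;

// Sketch: split [minVal, maxVal] into batchNum contiguous inclusive
// [start, end] pairs, spreading the remainder over the first batches.
public class BetweenSplitSketch {
    public static long[][] divide(long minVal, long maxVal, int batchNum) {
        long total = maxVal - minVal + 1;
        long base = total / batchNum;   // minimum rows per batch
        long rem = total % batchNum;    // first `rem` batches get one extra row
        long[][] bounds = new long[batchNum][2];
        long start = minVal;
        for (int i = 0; i < batchNum; i++) {
            long size = base + (i < rem ? 1 : 0);
            bounds[i][0] = start;
            bounds[i][1] = start + size - 1; // inclusive end, as in SQL BETWEEN
            start += size;
        }
        return bounds;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.deepToString(divide(0, 9, 3)));
        // [[0, 3], [4, 6], [7, 9]]
    }
}
```

Each pair then parameterizes one `WHERE col BETWEEN ? AND ?` query, which is what `getParameterValues()` hands to the parallel readers.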

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseShardBetweenParametersProvider.java
  class ClickHouseShardBetweenParametersProvider (line 30) | @Experimental
    method ClickHouseShardBetweenParametersProvider (line 36) | public ClickHouseShardBetweenParametersProvider(long minVal, long maxV...
    method ofBatchNum (line 44) | @Override
    method calculate (line 56) | @Override
    method repeatShardId (line 90) | private Integer[][] repeatShardId(int shardId, int shardNum) {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseShardTableParametersProvider.java
  class ClickHouseShardTableParametersProvider (line 29) | @Experimental
    method ClickHouseShardTableParametersProvider (line 35) | public ClickHouseShardTableParametersProvider(int[] shardIds) {
    method getParameterClause (line 41) | @Override
    method ofBatchNum (line 46) | @Override
    method calculate (line 58) | @Override

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/util/ClickHouseTypeUtil.java
  class ClickHouseTypeUtil (line 32) | public class ClickHouseTypeUtil {
    method toFlinkType (line 37) | public static DataType toFlinkType(ClickHouseColumnInfo clickHouseColu...
    method getInternalClickHouseType (line 117) | private static String getInternalClickHouseType(String clickHouseTypeL...

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/util/ClickHouseUtil.java
  class ClickHouseUtil (line 40) | public class ClickHouseUtil {
    method getJdbcUrl (line 63) | public static String getJdbcUrl(String url, @Nullable String database) {
    method getAndParseDistributedEngineSchema (line 73) | public static DistributedEngineFullSchema getAndParseDistributedEngine...
    method getClickHouseProperties (line 125) | public static Properties getClickHouseProperties(Map<String, String> t...
    method toFixedDateTimestamp (line 139) | public static Timestamp toFixedDateTimestamp(LocalTime localTime) {
    method quoteIdentifier (line 144) | public static String quoteIdentifier(String identifier) {

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/util/FilterPushDownHelper.java
  class FilterPushDownHelper (line 56) | public class FilterPushDownHelper {
    method FilterPushDownHelper (line 73) | private FilterPushDownHelper() {}
    method convert (line 75) | public static String convert(List<ResolvedExpression> filters) {
    method convertExpression (line 84) | private static Optional<String> convertExpression(
    method convertOnlyChild (line 122) | private static Optional<String> convertOnlyChild(
    method convertLogicExpression (line 139) | private static Optional<String> convertLogicExpression(
    method convertFieldAndLiteral (line 159) | private static Optional<String> convertFieldAndLiteral(
    method convertLiteral (line 191) | private static Optional<String> convertLiteral(ValueLiteralExpression ...
    method getFlinkTimeZone (line 226) | private static TimeZone getFlinkTimeZone() {
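`FilterPushDownHelper.convert` turns a list of resolved Flink filter expressions into a SQL `WHERE` fragment so ClickHouse can prune rows at the source. A toy sketch of the idea, with a small `(field, op, literal)` record standing in for Flink's `ResolvedExpression` tree (not the helper's real types):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of filter push-down: render each simple comparison as SQL and
// AND-join them, mirroring what convert() does for a filter list.
public class FilterPushDownSketch {
    public record Cond(String field, String op, Object literal) {}

    static String quote(String id) { return "`" + id + "`"; }

    static String render(Cond c) {
        // String literals get SQL quoting; numbers are emitted as-is.
        Object lit = c.literal() instanceof String s ? "'" + s + "'" : c.literal();
        return quote(c.field()) + " " + c.op() + " " + lit;
    }

    public static String convert(List<Cond> filters) {
        return filters.stream().map(FilterPushDownSketch::render)
                .collect(Collectors.joining(" AND "));
    }

    public static void main(String[] args) {
        String where = convert(List.of(
                new Cond("age", ">=", 18),
                new Cond("city", "=", "sh")));
        System.out.println(where); // `age` >= 18 AND `city` = 'sh'
    }
}
```

The real helper additionally handles nested AND/OR/NOT via `convertLogicExpression`, and returns `Optional.empty()` for expressions it cannot translate so they stay as Flink-side filters.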

FILE: fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/util/SqlClause.java
  type SqlClause (line 23) | public enum SqlClause {
    method SqlClause (line 46) | SqlClause(final Function<String[], String> function) {

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.12/org/apache/rocketmq/flink/RocketMQSourceWithTag.java
  class RocketMQSourceWithTag (line 56) | public class RocketMQSourceWithTag<OUT> extends RichParallelSourceFuncti...
    method RocketMQSourceWithTag (line 85) | public RocketMQSourceWithTag(TagKeyValueDeserializationSchema<OUT> sch...
    method open (line 90) | @Override
    method run (line 124) | @Override
    method awaitTermination (line 204) | private void awaitTermination() throws InterruptedException {
    method getMessageQueueOffset (line 210) | private long getMessageQueueOffset(MessageQueue mq) throws MQClientExc...
    method putMessageQueueOffset (line 242) | private void putMessageQueueOffset(MessageQueue mq, long offset) throw...
    method cancel (line 250) | @Override
    method close (line 269) | @Override
    method snapshotState (line 280) | @Override
    method initializeState (line 314) | @Override
    method getProducedType (line 341) | @Override
    method notifyCheckpointComplete (line 346) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.12/org/apache/rocketmq/flink/common/serialization/JsonDeserializationSchema.java
  class JsonDeserializationSchema (line 31) | public class JsonDeserializationSchema implements TagKeyValueDeserializa...
    method JsonDeserializationSchema (line 35) | public JsonDeserializationSchema(DeserializationSchema<RowData> key, D...
    method deserializeTagKeyAndValue (line 40) | @Override
    method getProducedType (line 55) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.12/org/apache/rocketmq/flink/common/serialization/SimpleTagKeyValueDeserializationSchema.java
  class SimpleTagKeyValueDeserializationSchema (line 32) | public class SimpleTagKeyValueDeserializationSchema implements TagKeyVal...
    method deserializeTagKeyAndValue (line 34) | @Override
    method getProducedType (line 42) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.12/org/apache/rocketmq/flink/common/serialization/TagKeyValueDeserializationSchema.java
  type TagKeyValueDeserializationSchema (line 28) | public interface TagKeyValueDeserializationSchema<T> extends ResultTypeQ...
    method deserializeTagKeyAndValue (line 30) | T deserializeTagKeyAndValue(byte[] tag, byte[] key, byte[] value);
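The `TagKeyValueDeserializationSchema` contract above is the extension point of `RocketMQSourceWithTag`: each message arrives as raw `(tag, key, value)` byte arrays and the schema maps them to the source's output type. A minimal sketch of the contract (the real interface also extends `ResultTypeQueryable` so Flink can infer the produced type; the map-producing schema below is illustrative, in the spirit of `SimpleTagKeyValueDeserializationSchema`):

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch of the tag/key/value deserialization contract.
public class TagSchemaSketch {
    public interface TagKeyValueSchema<T> {
        T deserialize(byte[] tag, byte[] key, byte[] value);
    }

    // Simple schema producing a Map; missing parts are just omitted.
    public static final TagKeyValueSchema<Map<String, String>> AS_MAP = (tag, key, value) -> {
        Map<String, String> m = new HashMap<>();
        if (tag != null) m.put("tag", new String(tag, StandardCharsets.UTF_8));
        if (key != null) m.put("key", new String(key, StandardCharsets.UTF_8));
        if (value != null) m.put("value", new String(value, StandardCharsets.UTF_8));
        return m;
    };

    public static void main(String[] args) {
        Map<String, String> m = AS_MAP.deserialize(
                "tagA".getBytes(StandardCharsets.UTF_8), null,
                "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(m.get("tag") + " " + m.get("value")); // tagA hello
    }
}
```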

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.13/org/apache/rocketmq/flink/RocketMQSourceWithTag.java
  class RocketMQSourceWithTag (line 56) | public class RocketMQSourceWithTag<OUT> extends RichParallelSourceFuncti...
    method RocketMQSourceWithTag (line 85) | public RocketMQSourceWithTag(TagKeyValueDeserializationSchema<OUT> sch...
    method open (line 90) | @Override
    method run (line 124) | @Override
    method awaitTermination (line 204) | private void awaitTermination() throws InterruptedException {
    method getMessageQueueOffset (line 210) | private long getMessageQueueOffset(MessageQueue mq) throws MQClientExc...
    method putMessageQueueOffset (line 242) | private void putMessageQueueOffset(MessageQueue mq, long offset) throw...
    method cancel (line 250) | @Override
    method close (line 269) | @Override
    method snapshotState (line 280) | @Override
    method initializeState (line 314) | @Override
    method getProducedType (line 341) | @Override
    method notifyCheckpointComplete (line 346) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.13/org/apache/rocketmq/flink/common/serialization/JsonDeserializationSchema.java
  class JsonDeserializationSchema (line 31) | public class JsonDeserializationSchema implements TagKeyValueDeserializa...
    method JsonDeserializationSchema (line 35) | public JsonDeserializationSchema(DeserializationSchema<RowData> key, D...
    method deserializeTagKeyAndValue (line 40) | @Override
    method getProducedType (line 55) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.13/org/apache/rocketmq/flink/common/serialization/SimpleTagKeyValueDeserializationSchema.java
  class SimpleTagKeyValueDeserializationSchema (line 32) | public class SimpleTagKeyValueDeserializationSchema implements TagKeyVal...
    method deserializeTagKeyAndValue (line 34) | @Override
    method getProducedType (line 42) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.13/org/apache/rocketmq/flink/common/serialization/TagKeyValueDeserializationSchema.java
  type TagKeyValueDeserializationSchema (line 28) | public interface TagKeyValueDeserializationSchema<T> extends ResultTypeQ...
    method deserializeTagKeyAndValue (line 30) | T deserializeTagKeyAndValue(byte[] tag, byte[] key, byte[] value);

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.14/org/apache/rocketmq/flink/RocketMQSourceWithTag.java
  class RocketMQSourceWithTag (line 56) | public class RocketMQSourceWithTag<OUT> extends RichParallelSourceFuncti...
    method RocketMQSourceWithTag (line 85) | public RocketMQSourceWithTag(TagKeyValueDeserializationSchema<OUT> sch...
    method open (line 90) | @Override
    method run (line 124) | @Override
    method awaitTermination (line 204) | private void awaitTermination() throws InterruptedException {
    method getMessageQueueOffset (line 210) | private long getMessageQueueOffset(MessageQueue mq) throws MQClientExc...
    method putMessageQueueOffset (line 242) | private void putMessageQueueOffset(MessageQueue mq, long offset) throw...
    method cancel (line 250) | @Override
    method close (line 269) | @Override
    method snapshotState (line 280) | @Override
    method initializeState (line 314) | @Override
    method getProducedType (line 341) | @Override
    method notifyCheckpointComplete (line 346) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.14/org/apache/rocketmq/flink/common/serialization/JsonDeserializationSchema.java
  class JsonDeserializationSchema (line 31) | public class JsonDeserializationSchema implements TagKeyValueDeserializa...
    method JsonDeserializationSchema (line 35) | public JsonDeserializationSchema(DeserializationSchema<RowData> key, D...
    method deserializeTagKeyAndValue (line 40) | @Override
    method getProducedType (line 55) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.14/org/apache/rocketmq/flink/common/serialization/SimpleTagKeyValueDeserializationSchema.java
  class SimpleTagKeyValueDeserializationSchema (line 32) | public class SimpleTagKeyValueDeserializationSchema implements TagKeyVal...
    method deserializeTagKeyAndValue (line 34) | @Override
    method getProducedType (line 42) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.14/org/apache/rocketmq/flink/common/serialization/TagKeyValueDeserializationSchema.java
  type TagKeyValueDeserializationSchema (line 28) | public interface TagKeyValueDeserializationSchema<T> extends ResultTypeQ...
    method deserializeTagKeyAndValue (line 30) | T deserializeTagKeyAndValue(byte[] tag, byte[] key, byte[] value);

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQConfig.java
  class RocketMQConfig (line 38) | public class RocketMQConfig {
    method buildProducerConfigs (line 114) | public static void buildProducerConfigs(Properties props, DefaultMQPro...
    method buildConsumerConfigs (line 136) | public static void buildConsumerConfigs(Properties props, DefaultMQPul...
    method buildCommonConfigs (line 150) | public static void buildCommonConfigs(Properties props, ClientConfig c...
    method buildAclRPCHook (line 167) | public static AclClientRPCHook buildAclRPCHook(Properties props) {

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQSink.java
  class RocketMQSink (line 51) | public class RocketMQSink<IN> extends RichSinkFunction<IN> implements Ch...
    method RocketMQSink (line 70) | public RocketMQSink(KeyValueSerializationSchema<IN> schema, TopicSelec...
    method open (line 86) | @Override
    method invoke (line 110) | @Override
    method prepareMessage (line 154) | private Message prepareMessage(IN input) {
    method withAsync (line 172) | public RocketMQSink<IN> withAsync(boolean async) {
    method withBatchFlushOnCheckpoint (line 177) | public RocketMQSink<IN> withBatchFlushOnCheckpoint(boolean batchFlushO...
    method withBatchSize (line 182) | public RocketMQSink<IN> withBatchSize(int batchSize) {
    method close (line 187) | @Override
    method flushSync (line 200) | private void flushSync() throws Exception {
    method snapshotState (line 211) | @Override
    method initializeState (line 216) | @Override
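`RocketMQSink` exposes `withBatchSize` and `withBatchFlushOnCheckpoint`, and its `snapshotState` calls through to `flushSync`: records are buffered up to a batch size, and a checkpoint forces the partial batch out, so a completed checkpoint implies every buffered record was sent. A stripped-down sketch of that batching pattern (the list standing in for the broker and the method names here are illustrative, not the sink's real internals):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of batch-flush-on-checkpoint: buffer until batchSize, and always
// flush from snapshotState() so no record stays buffered across a checkpoint.
public class BatchingSinkSketch {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    private final List<String> sent = new ArrayList<>(); // stand-in for the broker

    public BatchingSinkSketch(int batchSize) { this.batchSize = batchSize; }

    public void invoke(String record) {
        buffer.add(record);
        if (buffer.size() >= batchSize) flush();
    }

    // Called when a checkpoint is taken.
    public void snapshotState() { flush(); }

    private void flush() {
        sent.addAll(buffer); // the real sink sends synchronously and checks results
        buffer.clear();
    }

    public int sentCount() { return sent.size(); }

    public static void main(String[] args) {
        BatchingSinkSketch sink = new BatchingSinkSketch(3);
        sink.invoke("a"); sink.invoke("b");
        sink.snapshotState(); // checkpoint forces the partial batch out
        System.out.println(sink.sentCount()); // 2
    }
}
```

Tying the flush to `snapshotState` is what makes batching safe under at-least-once delivery: on recovery, replay restarts from the last checkpoint, before which everything was already flushed.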

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQSinkWithTag.java
  class RocketMQSinkWithTag (line 51) | public class RocketMQSinkWithTag<IN> extends RichSinkFunction<IN> implem...
    method RocketMQSinkWithTag (line 70) | public RocketMQSinkWithTag(JsonSerializationSchema schema, TopicSelect...
    method open (line 86) | @Override
    method invoke (line 110) | @Override
    method prepareMessage (line 154) | private Message prepareMessage(IN input) {
    method withAsync (line 175) | public RocketMQSinkWithTag<IN> withAsync(boolean async) {
    method withBatchFlushOnCheckpoint (line 180) | public RocketMQSinkWithTag<IN> withBatchFlushOnCheckpoint(boolean batc...
    method withBatchSize (line 185) | public RocketMQSinkWithTag<IN> withBatchSize(int batchSize) {
    method close (line 190) | @Override
    method flushSync (line 203) | private void flushSync() throws Exception {
    method snapshotState (line 214) | @Override
    method initializeState (line 219) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQSource.java
  class RocketMQSource (line 51) | public class RocketMQSource<OUT> extends RichParallelSourceFunction<OUT>
    method RocketMQSource (line 80) | public RocketMQSource(KeyValueDeserializationSchema<OUT> schema, Prope...
    method open (line 85) | @Override
    method run (line 119) | @Override
    method awaitTermination (line 204) | private void awaitTermination() throws InterruptedException {
    method getMessageQueueOffset (line 210) | private long getMessageQueueOffset(MessageQueue mq) throws MQClientExc...
    method putMessageQueueOffset (line 241) | private void putMessageQueueOffset(MessageQueue mq, long offset) throw...
    method cancel (line 248) | @Override
    method close (line 267) | @Override
    method snapshotState (line 278) | @Override
    method initializeState (line 312) | @Override
    method getProducedType (line 342) | @Override
    method notifyCheckpointComplete (line 347) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQUtils.java
  class RocketMQUtils (line 23) | public final class RocketMQUtils {
    method getInteger (line 25) | public static int getInteger(Properties props, String key, int default...
    method getLong (line 29) | public static long getLong(Properties props, String key, long defaultV...
    method getBoolean (line 33) | public static boolean getBoolean(Properties props, String key, boolean...
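`RocketMQUtils` is a set of typed getters over `Properties`: read an entry, fall back to a default when absent, and parse to the target type. A plausible sketch of two of them (parsing behavior on malformed values is an assumption here):

```java
import java.util.Properties;

// Sketch of typed Properties getters in the style of RocketMQUtils.
public class PropsUtilSketch {
    public static int getInteger(Properties props, String key, int defaultValue) {
        String v = props.getProperty(key);
        return v == null ? defaultValue : Integer.parseInt(v.trim());
    }

    public static boolean getBoolean(Properties props, String key, boolean defaultValue) {
        String v = props.getProperty(key);
        return v == null ? defaultValue : Boolean.parseBoolean(v.trim());
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("batch.size", "32");
        System.out.println(getInteger(p, "batch.size", 1000)); // 32
        System.out.println(getBoolean(p, "async", true));      // true (default)
    }
}
```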

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RunningChecker.java
  class RunningChecker (line 23) | public class RunningChecker implements Serializable {
    method isRunning (line 26) | public boolean isRunning() {
    method setRunning (line 30) | public void setRunning(boolean running) {

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/selector/DefaultTopicSelector.java
  class DefaultTopicSelector (line 21) | public class DefaultTopicSelector<T> implements TopicSelector<T> {
    method DefaultTopicSelector (line 25) | public DefaultTopicSelector(final String topicName, final String tagNa...
    method DefaultTopicSelector (line 30) | public DefaultTopicSelector(final String topicName) {
    method getTopic (line 34) | @Override
    method getTag (line 39) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/selector/SimpleTopicSelector.java
  class SimpleTopicSelector (line 29) | public class SimpleTopicSelector implements TopicSelector<Map> {
    method SimpleTopicSelector (line 45) | public SimpleTopicSelector(String topicFieldName, String defaultTopicN...
    method getTopic (line 52) | @Override
    method getTag (line 63) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/selector/TopicSelector.java
  type TopicSelector (line 23) | public interface TopicSelector<T> extends Serializable {
    method getTopic (line 25) | String getTopic(T tuple);
    method getTag (line 27) | String getTag(T tuple);
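`TopicSelector` is how the sink decides, per record, which RocketMQ topic and tag to write to. The two implementations above cover the common cases: `DefaultTopicSelector` returns a fixed topic/tag, while `SimpleTopicSelector` reads the topic out of a field of a `Map` record with a default fallback. A hedged sketch of both (local interface and factory names are illustrative):

```java
import java.util.Map;

// Sketch of the per-record topic/tag selection contract.
public class TopicSelectorSketch {
    public interface Selector<T> {
        String getTopic(T record);
        String getTag(T record);
    }

    // Fixed topic and tag, in the spirit of DefaultTopicSelector.
    public static Selector<Object> fixed(String topic, String tag) {
        return new Selector<Object>() {
            public String getTopic(Object r) { return topic; }
            public String getTag(Object r) { return tag; }
        };
    }

    // Topic read from a record field, in the spirit of SimpleTopicSelector.
    public static Selector<Map<String, String>> fromField(String field, String defaultTopic) {
        return new Selector<Map<String, String>>() {
            public String getTopic(Map<String, String> r) {
                return r.getOrDefault(field, defaultTopic);
            }
            public String getTag(Map<String, String> r) { return ""; }
        };
    }

    public static void main(String[] args) {
        System.out.println(fixed("orders", "created").getTopic(null));        // orders
        System.out.println(fromField("topic", "fallback").getTopic(Map.of())); // fallback
    }
}
```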

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/JsonSerializationSchema.java
  class JsonSerializationSchema (line 13) | public class JsonSerializationSchema implements TagKeyValueSerialization...
    method JsonSerializationSchema (line 25) | public JsonSerializationSchema(
    method JsonSerializationSchema (line 34) | public JsonSerializationSchema(
    method open (line 47) | @Override
    method serialize (line 52) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/KeyValueDeserializationSchema.java
  type KeyValueDeserializationSchema (line 25) | public interface KeyValueDeserializationSchema<T> extends ResultTypeQuer...
    method deserializeKeyAndValue (line 26) | T deserializeKeyAndValue(byte[] key, byte[] value);

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/KeyValueSerializationSchema.java
  type KeyValueSerializationSchema (line 23) | public interface KeyValueSerializationSchema<T> extends Serializable {
    method serializeKey (line 25) | byte[] serializeKey(T tuple);
    method serializeValue (line 27) | byte[] serializeValue(T tuple);
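`KeyValueSerializationSchema` is the outbound mirror of the deserialization contract: each record is split into a key `byte[]` and a value `byte[]` before being wrapped in a RocketMQ `Message`. A sketch in the spirit of `SimpleKeyValueSerializationSchema`, which pulls configurable fields out of a `Map` record (field names below are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Sketch of the key/value serialization contract used by the sink.
public class KVSerializationSketch {
    public interface KVSchema<T> {
        byte[] serializeKey(T record);
        byte[] serializeValue(T record);
    }

    public static KVSchema<Map<String, String>> byFields(String keyField, String valueField) {
        return new KVSchema<Map<String, String>>() {
            public byte[] serializeKey(Map<String, String> r) {
                return r.getOrDefault(keyField, "").getBytes(StandardCharsets.UTF_8);
            }
            public byte[] serializeValue(Map<String, String> r) {
                return r.getOrDefault(valueField, "").getBytes(StandardCharsets.UTF_8);
            }
        };
    }

    public static void main(String[] args) {
        KVSchema<Map<String, String>> s = byFields("id", "payload");
        byte[] v = s.serializeValue(Map.of("id", "1", "payload", "hi"));
        System.out.println(new String(v, StandardCharsets.UTF_8)); // hi
    }
}
```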

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/SimpleKeyValueDeserializationSchema.java
  class SimpleKeyValueDeserializationSchema (line 27) | public class SimpleKeyValueDeserializationSchema implements KeyValueDese...
    method SimpleKeyValueDeserializationSchema (line 34) | public SimpleKeyValueDeserializationSchema() {
    method SimpleKeyValueDeserializationSchema (line 43) | public SimpleKeyValueDeserializationSchema(String keyField, String val...
    method deserializeKeyAndValue (line 48) | @Override
    method getProducedType (line 62) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/SimpleKeyValueSerializationSchema.java
  class SimpleKeyValueSerializationSchema (line 24) | public class SimpleKeyValueSerializationSchema implements KeyValueSerial...
    method SimpleKeyValueSerializationSchema (line 31) | public SimpleKeyValueSerializationSchema() {
    method SimpleKeyValueSerializationSchema (line 40) | public SimpleKeyValueSerializationSchema(String keyField, String value...
    method serializeKey (line 45) | @Override
    method serializeValue (line 54) | @Override

FILE: fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/TagKeyValueSerializationSchema.java
  type TagKeyValueSerializationSchema (line 30) | public interface TagKeyValueSerializationSchema<T> extends Serializable {
    method open (line 32) | default void open(SerializationSchema.InitializationContext context) t...
    method serialize (line 35) | Message serialize(T element);

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/client/ConnFactoryExtend.java
  class ConnFactoryExtend (line 23) | public class ConnFactoryExtend extends ConnectionFactory implements Seri...
    method ConnFactoryExtend (line 24) | public ConnFactoryExtend() {

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/client/ConnectionFactory.java
  class ConnectionFactory (line 65) | @InterfaceAudience.Public
    method ConnectionFactory (line 71) | protected ConnectionFactory() {
    method createConnection (line 98) | public static Connection createConnection() throws IOException {
    method createConnection (line 127) | public static Connection createConnection(Configuration conf) throws I...
    method createConnection (line 157) | public static Connection createConnection(Configuration conf, Executor...
    method createConnection (line 188) | public static Connection createConnection(Configuration conf, User user)
    method createConnection (line 220) | public static Connection createConnection(Configuration conf, Executor...
    method createConnection (line 230) | static Connection createConnection(final Configuration conf, final boo...
    method cleanup (line 254) | public static void cleanup() {
    method cleanupInstance (line 259) | public void cleanupInstance() {
    method getConnection (line 263) | public static Connection getConnection(final Configuration conf) throw...
    method getConnectionInstance (line 267) | public Connection getConnectionInstance(final Configuration conf) thro...
    method removeEldestEntry (line 279) | @Override
    method getConnectionInternal (line 287) | private static Connection getConnectionInternal(final Configuration conf)
    method getConnectionInternalInstance (line 304) | private Connection getConnectionInternalInstance(final Configuration c...
    method deleteConnection (line 321) | private static void deleteConnection(HConnectionKey connectionKey) {
    method deleteAllConnections (line 338) | private static void deleteAllConnections() {
    method deleteAllConnectionsInstance (line 349) | private void deleteAllConnectionsInstance() {
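The extended `ConnectionFactory` caches connections per configuration (`getConnectionInternal`) and overrides `removeEldestEntry`, i.e. the cache is a `LinkedHashMap` that evicts its least-recently-used entry once a size cap is hit. A simplified sketch of that caching idea, with stand-in types for HBase's `Configuration`/`Connection` (the real class also closes connections and tracks them by `HConnectionKey`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an LRU connection cache via LinkedHashMap.removeEldestEntry.
public class ConnCacheSketch {
    public interface Conn { void close(); }

    private final int maxSize;
    private final LinkedHashMap<String, Conn> cache;

    public ConnCacheSketch(int maxSize) {
        this.maxSize = maxSize;
        // accessOrder=true makes iteration order least-recently-used first.
        this.cache = new LinkedHashMap<String, Conn>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Conn> eldest) {
                if (size() > ConnCacheSketch.this.maxSize) {
                    eldest.getValue().close(); // release before eviction
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized Conn get(String confKey) {
        // Create on first use, reuse afterwards (no-op close for the sketch).
        return cache.computeIfAbsent(confKey, k -> () -> { });
    }

    public synchronized int size() { return cache.size(); }

    public static void main(String[] args) {
        ConnCacheSketch c = new ConnCacheSketch(2);
        c.get("a"); c.get("b"); c.get("c"); // "a" evicted as least recently used
        System.out.println(c.size()); // 2
    }
}
```

Capping the cache this way bounds the number of open HBase connections per JVM while still reusing them across callers with the same configuration.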

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/spark/SparkSQLPushDownFilter.java
  class SparkSQLPushDownFilter (line 42) | public class SparkSQLPushDownFilter extends FilterBase{
    method SparkSQLPushDownFilter (line 57) | public SparkSQLPushDownFilter(DynamicLogicExpression dynamicLogicExpre...
    method SparkSQLPushDownFilter (line 67) | public SparkSQLPushDownFilter(DynamicLogicExpression dynamicLogicExpre...
    method filterKeyValue (line 99) | @Override
    method filterRow (line 153) | @Override
    method parseFrom (line 174) | @SuppressWarnings("unused")
    method toByteArray (line 232) | public byte[] toByteArray() {

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseBulkDeleteExample.java
  class JavaHBaseBulkDeleteExample (line 37) | final public class JavaHBaseBulkDeleteExample {
    method JavaHBaseBulkDeleteExample (line 39) | private JavaHBaseBulkDeleteExample() {}
    method main (line 41) | public static void main(String[] args) {
    class DeleteFunction (line 74) | public static class DeleteFunction implements Function<byte[], Delete> {
      method call (line 76) | public Delete call(byte[] v) throws Exception {

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseBulkGetExample.java
  class JavaHBaseBulkGetExample (line 40) | final public class JavaHBaseBulkGetExample {
    method JavaHBaseBulkGetExample (line 42) | private JavaHBaseBulkGetExample() {}
    method main (line 44) | public static void main(String[] args) {
    class GetFunction (line 76) | public static class GetFunction implements Function<byte[], Get> {
      method call (line 80) | public Get call(byte[] v) throws Exception {
    class ResultFunction (line 85) | public static class ResultFunction implements Function<Result, String> {
      method call (line 89) | public String call(Result result) throws Exception {

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseBulkPutExample.java
  class JavaHBaseBulkPutExample (line 37) | final public class JavaHBaseBulkPutExample {
    method JavaHBaseBulkPutExample (line 39) | private JavaHBaseBulkPutExample() {}
    method main (line 41) | public static void main(String[] args) {
    class PutFunction (line 76) | public static class PutFunction implements Function<String, Put> {
      method call (line 80) | public Put call(String v) throws Exception {

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseDistributedScan.java
  class JavaHBaseDistributedScan (line 40) | final public class JavaHBaseDistributedScan {
    method JavaHBaseDistributedScan (line 42) | private JavaHBaseDistributedScan() {}
    method main (line 44) | public static void main(String[] args) {
    class ScanConvertFunction (line 74) | private static class ScanConvertFunction implements
      method call (line 76) | @Override

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseMapGetPutExample.java
  class JavaHBaseMapGetPutExample (line 46) | final public class JavaHBaseMapGetPutExample {
    method JavaHBaseMapGetPutExample (line 48) | private JavaHBaseMapGetPutExample() {}
    method main (line 50) | public static void main(String[] args) {
    class GetFunction (line 99) | public static class GetFunction implements Function<byte[], Get> {
      method call (line 101) | public Get call(byte[] v) throws Exception {

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseStreamingBulkPutExample.java
  class JavaHBaseStreamingBulkPutExample (line 35) | final public class JavaHBaseStreamingBulkPutExample {
    method JavaHBaseStreamingBulkPutExample (line 37) | private JavaHBaseStreamingBulkPutExample() {}
    method main (line 39) | public static void main(String[] args) {
    class PutFunction (line 75) | public static class PutFunction implements Function<String, Put> {
      method call (line 79) | public Put call(String v) throws Exception {

FILE: fire-connectors/spark-connectors/spark-hbase/src/main/java/org/apache/hadoop/hbase/spark/protobuf/generated/FilterProtos.java
  class FilterProtos (line 6) | public final class FilterProtos {
    method FilterProtos (line 7) | private FilterProtos() {}
    method registerAllExtensions (line 8) | public static void registerAllExtensions(
    type SQLPredicatePushDownCellToColumnMappingOrBuilder (line 11) | public interface SQLPredicatePushDownCellToColumnMappingOrBuilder
      method hasColumnFamily (line 18) | boolean hasColumnFamily();
      method getColumnFamily (line 22) | com.google.protobuf.ByteString getColumnFamily();
      method hasQualifier (line 28) | boolean hasQualifier();
      method getQualifier (line 32) | com.google.protobuf.ByteString getQualifier();
      method hasColumnName (line 38) | boolean hasColumnName();
      method getColumnName (line 42) | java.lang.String getColumnName();
      method getColumnNameBytes (line 46) | com.google.protobuf.ByteString
    class SQLPredicatePushDownCellToColumnMapping (line 52) | public static final class SQLPredicatePushDownCellToColumnMapping extends
      method SQLPredicatePushDownCellToColumnMapping (line 56) | private SQLPredicatePushDownCellToColumnMapping(com.google.protobuf....
      method SQLPredicatePushDownCellToColumnMapping (line 60) | private SQLPredicatePushDownCellToColumnMapping(boolean noInit) { th...
      method getDefaultInstance (line 63) | public static SQLPredicatePushDownCellToColumnMapping getDefaultInst...
      method getDefaultInstanceForType (line 67) | public SQLPredicatePushDownCellToColumnMapping getDefaultInstanceFor...
      method getUnknownFields (line 72) | @java.lang.Override
      method SQLPredicatePushDownCellToColumnMapping (line 77) | private SQLPredicatePushDownCellToColumnMapping(
      method getDescriptor (line 127) | public static final com.google.protobuf.Descriptors.Descriptor
      method internalGetFieldAccessorTable (line 132) | protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
      method parsePartialFrom (line 141) | public SQLPredicatePushDownCellToColumnMapping parsePartialFrom(
      method getParserForType (line 149) | @java.lang.Override
      method hasColumnFamily (line 161) | public boolean hasColumnFamily() {
      method getColumnFamily (line 167) | public com.google.protobuf.ByteString getColumnFamily() {
      method hasQualifier (line 177) | public boolean hasQualifier() {
      method getQualifier (line 183) | public com.google.protobuf.ByteString getQualifier() {
      method hasColumnName (line 193) | public boolean hasColumnName() {
      method getColumnName (line 199) | public java.lang.String getColumnName() {
      method getColumnNameBytes (line 216) | public com.google.protobuf.ByteString
      method initFields (line 230) | private void initFields() {
      method isInitialized (line 236) | public final boolean isInitialized() {
      method writeTo (line 256) | public void writeTo(com.google.protobuf.CodedOutputStream output)
      method getSerializedSize (line 272) | public int getSerializedSize() {
      method writeReplace (line 295) | @java.lang.Override
      method equals (line 301) | @java.lang.Override
      method hashCode (line 333) | @java.lang.Override
      method parseFrom (line 357) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 362) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 368) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 372) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 378) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 382) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseDelimitedFrom (line 388) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseDelimitedFrom (line 392) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 398) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 403) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method newBuilder (line 410) | public static Builder newBuilder() { return Builder.create(); }
      method newBuilderForType (line 411) | public Builder newBuilderForType() { return newBuilder(); }
      method newBuilder (line 412) | public static Builder newBuilder(org.apache.hadoop.hbase.spark.proto...
      method toBuilder (line 415) | public Builder toBuilder() { return newBuilder(this); }
      method newBuilderForType (line 417) | @java.lang.Override
      class Builder (line 426) | public static final class Builder extends
        method getDescriptor (line 429) | public static final com.google.protobuf.Descriptors.Descriptor
        method internalGetFieldAccessorTable (line 434) | protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
        method Builder (line 442) | private Builder() {
        method Builder (line 446) | private Builder(
        method maybeForceBuilderInitialization (line 451) | private void maybeForceBuilderInitialization() {
        method create (line 455) | private static Builder create() {
        method clear (line 459) | public Builder clear() {
        method clone (line 470) | public Builder clone() {
        method getDescriptorForType (line 474) | public com.google.protobuf.Descriptors.Descriptor
        method getDefaultInstanceForType (line 479) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method build (line 483) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method buildPartial (line 491) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method mergeFrom (line 512) | public Builder mergeFrom(com.google.protobuf.Message other) {
        method mergeFrom (line 521) | public Builder mergeFrom(org.apache.hadoop.hbase.spark.protobuf.ge...
        method isInitialized (line 538) | public final boolean isInitialized() {
        method mergeFrom (line 554) | public Builder mergeFrom(
        method hasColumnFamily (line 578) | public boolean hasColumnFamily() {
        method getColumnFamily (line 584) | public com.google.protobuf.ByteString getColumnFamily() {
        method setColumnFamily (line 590) | public Builder setColumnFamily(com.google.protobuf.ByteString valu...
        method clearColumnFamily (line 602) | public Builder clearColumnFamily() {
        method hasQualifier (line 614) | public boolean hasQualifier() {
        method getQualifier (line 620) | public com.google.protobuf.ByteString getQualifier() {
        method setQualifier (line 626) | public Builder setQualifier(com.google.protobuf.ByteString value) {
        method clearQualifier (line 638) | public Builder clearQualifier() {
        method hasColumnName (line 650) | public boolean hasColumnName() {
        method getColumnName (line 656) | public java.lang.String getColumnName() {
        method getColumnNameBytes (line 670) | public com.google.protobuf.ByteString
        method setColumnName (line 686) | public Builder setColumnName(
        method clearColumnName (line 699) | public Builder clearColumnName() {
        method setColumnNameBytes (line 708) | public Builder setColumnNameBytes(
    type SQLPredicatePushDownFilterOrBuilder (line 730) | public interface SQLPredicatePushDownFilterOrBuilder
      method hasDynamicLogicExpression (line 737) | boolean hasDynamicLogicExpression();
      method getDynamicLogicExpression (line 741) | java.lang.String getDynamicLogicExpression();
      method getDynamicLogicExpressionBytes (line 745) | com.google.protobuf.ByteString
      method getValueFromQueryArrayList (line 752) | java.util.List<com.google.protobuf.ByteString> getValueFromQueryArra...
      method getValueFromQueryArrayCount (line 756) | int getValueFromQueryArrayCount();
      method getValueFromQueryArray (line 760) | com.google.protobuf.ByteString getValueFromQueryArray(int index);
      method getCellToColumnMappingList (line 766) | java.util.List<org.apache.hadoop.hbase.spark.protobuf.generated.Filt...
      method getCellToColumnMapping (line 771) | org.apache.hadoop.hbase.spark.protobuf.generated.FilterProtos.SQLPre...
      method getCellToColumnMappingCount (line 775) | int getCellToColumnMappingCount();
      method getCellToColumnMappingOrBuilderList (line 779) | java.util.List<? extends org.apache.hadoop.hbase.spark.protobuf.gene...
      method getCellToColumnMappingOrBuilder (line 784) | org.apache.hadoop.hbase.spark.protobuf.generated.FilterProtos.SQLPre...
    class SQLPredicatePushDownFilter (line 790) | public static final class SQLPredicatePushDownFilter extends
      method SQLPredicatePushDownFilter (line 794) | private SQLPredicatePushDownFilter(com.google.protobuf.GeneratedMess...
      method SQLPredicatePushDownFilter (line 798) | private SQLPredicatePushDownFilter(boolean noInit) { this.unknownFie...
      method getDefaultInstance (line 801) | public static SQLPredicatePushDownFilter getDefaultInstance() {
      method getDefaultInstanceForType (line 805) | public SQLPredicatePushDownFilter getDefaultInstanceForType() {
      method getUnknownFields (line 810) | @java.lang.Override
      method SQLPredicatePushDownFilter (line 815) | private SQLPredicatePushDownFilter(
      method getDescriptor (line 877) | public static final com.google.protobuf.Descriptors.Descriptor
      method internalGetFieldAccessorTable (line 882) | protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
      method parsePartialFrom (line 891) | public SQLPredicatePushDownFilter parsePartialFrom(
      method getParserForType (line 899) | @java.lang.Override
      method hasDynamicLogicExpression (line 911) | public boolean hasDynamicLogicExpression() {
      method getDynamicLogicExpression (line 917) | public java.lang.String getDynamicLogicExpression() {
      method getDynamicLogicExpressionBytes (line 934) | public com.google.protobuf.ByteString
      method getValueFromQueryArrayList (line 954) | public java.util.List<com.google.protobuf.ByteString>
      method getValueFromQueryArrayCount (line 961) | public int getValueFromQueryArrayCount() {
      method getValueFromQueryArray (line 967) | public com.google.protobuf.ByteString getValueFromQueryArray(int ind...
      method getCellToColumnMappingList (line 977) | public java.util.List<org.apache.hadoop.hbase.spark.protobuf.generat...
      method getCellToColumnMappingOrBuilderList (line 983) | public java.util.List<? extends org.apache.hadoop.hbase.spark.protob...
      method getCellToColumnMappingCount (line 990) | public int getCellToColumnMappingCount() {
      method getCellToColumnMapping (line 996) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProtos...
      method getCellToColumnMappingOrBuilder (line 1002) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProtos...
      method initFields (line 1007) | private void initFields() {
      method isInitialized (line 1013) | public final boolean isInitialized() {
      method writeTo (line 1031) | public void writeTo(com.google.protobuf.CodedOutputStream output)
      method getSerializedSize (line 1047) | public int getSerializedSize() {
      method writeReplace (line 1075) | @java.lang.Override
      method equals (line 1081) | @java.lang.Override
      method hashCode (line 1107) | @java.lang.Override
      method parseFrom (line 1131) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 1136) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 1142) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 1146) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 1152) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 1156) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseDelimitedFrom (line 1162) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseDelimitedFrom (line 1166) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 1172) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method parseFrom (line 1177) | public static org.apache.hadoop.hbase.spark.protobuf.generated.Filte...
      method newBuilder (line 1184) | public static Builder newBuilder() { return Builder.create(); }
      method newBuilderForType (line 1185) | public Builder newBuilderForType() { return newBuilder(); }
      method newBuilder (line 1186) | public static Builder newBuilder(org.apache.hadoop.hbase.spark.proto...
      method toBuilder (line 1189) | public Builder toBuilder() { return newBuilder(this); }
      method newBuilderForType (line 1191) | @java.lang.Override
      class Builder (line 1200) | public static final class Builder extends
        method getDescriptor (line 1203) | public static final com.google.protobuf.Descriptors.Descriptor
        method internalGetFieldAccessorTable (line 1208) | protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
        method Builder (line 1216) | private Builder() {
        method Builder (line 1220) | private Builder(
        method maybeForceBuilderInitialization (line 1225) | private void maybeForceBuilderInitialization() {
        method create (line 1230) | private static Builder create() {
        method clear (line 1234) | public Builder clear() {
        method clone (line 1249) | public Builder clone() {
        method getDescriptorForType (line 1253) | public com.google.protobuf.Descriptors.Descriptor
        method getDefaultInstanceForType (line 1258) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method build (line 1262) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method buildPartial (line 1270) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method mergeFrom (line 1297) | public Builder mergeFrom(com.google.protobuf.Message other) {
        method mergeFrom (line 1306) | public Builder mergeFrom(org.apache.hadoop.hbase.spark.protobuf.ge...
        method isInitialized (line 1353) | public final boolean isInitialized() {
        method mergeFrom (line 1367) | public Builder mergeFrom(
        method hasDynamicLogicExpression (line 1391) | public boolean hasDynamicLogicExpression() {
        method getDynamicLogicExpression (line 1397) | public java.lang.String getDynamicLogicExpression() {
        method getDynamicLogicExpressionBytes (line 1411) | public com.google.protobuf.ByteString
        method setDynamicLogicExpression (line 1427) | public Builder setDynamicLogicExpression(
        method clearDynamicLogicExpression (line 1440) | public Builder clearDynamicLogicExpression() {
        method setDynamicLogicExpressionBytes (line 1449) | public Builder setDynamicLogicExpressionBytes(
        method ensureValueFromQueryArrayIsMutable (line 1462) | private void ensureValueFromQueryArrayIsMutable() {
        method getValueFromQueryArrayList (line 1471) | public java.util.List<com.google.protobuf.ByteString>
        method getValueFromQueryArrayCount (line 1478) | public int getValueFromQueryArrayCount() {
        method getValueFromQueryArray (line 1484) | public com.google.protobuf.ByteString getValueFromQueryArray(int i...
        method setValueFromQueryArray (line 1490) | public Builder setValueFromQueryArray(
        method addValueFromQueryArray (line 1503) | public Builder addValueFromQueryArray(com.google.protobuf.ByteStri...
        method addAllValueFromQueryArray (line 1515) | public Builder addAllValueFromQueryArray(
        method clearValueFromQueryArray (line 1525) | public Builder clearValueFromQueryArray() {
        method ensureCellToColumnMappingIsMutable (line 1535) | private void ensureCellToColumnMappingIsMutable() {
        method getCellToColumnMappingList (line 1548) | public java.util.List<org.apache.hadoop.hbase.spark.protobuf.gener...
        method getCellToColumnMappingCount (line 1558) | public int getCellToColumnMappingCount() {
        method getCellToColumnMapping (line 1568) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method setCellToColumnMapping (line 1578) | public Builder setCellToColumnMapping(
        method setCellToColumnMapping (line 1595) | public Builder setCellToColumnMapping(
        method addCellToColumnMapping (line 1609) | public Builder addCellToColumnMapping(org.apache.hadoop.hbase.spar...
        method addCellToColumnMapping (line 1625) | public Builder addCellToColumnMapping(
        method addCellToColumnMapping (line 1642) | public Builder addCellToColumnMapping(
        method addCellToColumnMapping (line 1656) | public Builder addCellToColumnMapping(
        method addAllCellToColumnMapping (line 1670) | public Builder addAllCellToColumnMapping(
        method clearCellToColumnMapping (line 1684) | public Builder clearCellToColumnMapping() {
        method removeCellToColumnMapping (line 1697) | public Builder removeCellToColumnMapping(int index) {
        method getCellToColumnMappingBuilder (line 1710) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method getCellToColumnMappingOrBuilder (line 1717) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method getCellToColumnMappingOrBuilderList (line 1727) | public java.util.List<? extends org.apache.hadoop.hbase.spark.prot...
        method addCellToColumnMappingBuilder (line 1738) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method addCellToColumnMappingBuilder (line 1745) | public org.apache.hadoop.hbase.spark.protobuf.generated.FilterProt...
        method getCellToColumnMappingBuilderList (line 1753) | public java.util.List<org.apache.hadoop.hbase.spark.protobuf.gener...
        method getCellToColumnMappingFieldBuilder (line 1757) | private com.google.protobuf.RepeatedFieldBuilder<
    method getDescriptor (line 1794) | public static com.google.protobuf.Descriptors.FileDescriptor
    method assignDescriptors (line 1815) | public com.google.protobuf.ExtensionRegistry assignDescriptors(

FILE: fire-connectors/spark-connectors/spark-rocketmq/src/main/java/org/apache/rocketmq/spark/OffsetCommitCallback.java
  type OffsetCommitCallback (line 26) | public interface OffsetCommitCallback {
    method onComplete (line 33) | void onComplete(Map<MessageQueue, Long> offsets, Exception exception);

FILE: fire-connectors/spark-connectors/spark-rocketmq/src/main/java/org/apache/rocketmq/spark/RocketMQConfig.java
  class RocketMQConfig (line 33) | public class RocketMQConfig {
    method buildConsumerConfigs (line 124) | public static void buildConsumerConfigs(Properties props, DefaultMQPus...
    method buildCommonConfigs (line 162) | public static void buildCommonConfigs(Properties props, ClientConfig c...
    method getInteger (line 180) | public static int getInteger(Properties props, String key, int default...
    method getBoolean (line 184) | public static boolean getBoolean(Properties props, String key, boolean...
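The `getInteger`/`getBoolean` helpers listed above read typed values out of a `Properties` bag with a fallback default. A standalone sketch of that pattern (class name and parsing details are assumptions, not the actual `RocketMQConfig` code):

```java
import java.util.Properties;

// Standalone sketch mirroring the getInteger/getBoolean helper shape:
// read a typed value from a Properties bag, falling back to a default
// when the key is absent.
public class ConfigHelpers {
    public static int getInteger(Properties props, String key, int defaultValue) {
        String value = props.getProperty(key);
        return value == null ? defaultValue : Integer.parseInt(value.trim());
    }

    public static boolean getBoolean(Properties props, String key, boolean defaultValue) {
        String value = props.getProperty(key);
        return value == null ? defaultValue : Boolean.parseBoolean(value.trim());
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("pull.batch.size", "64");
        System.out.println(getInteger(props, "pull.batch.size", 32)); // prints 64
        System.out.println(getBoolean(props, "missing.key", true));   // prints true
    }
}
```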

FILE: fire-connectors/spark-connectors/spark-rocketmq/src/main/java/org/apache/rocketmq/spark/TopicQueueId.java
  class TopicQueueId (line 20) | public final class TopicQueueId implements Serializable {
    method TopicQueueId (line 26) | public TopicQueueId(String topic, int queueId) {
    method queueId (line 31) | public int queueId() {
    method topic (line 35) | public String topic() {
    method hashCode (line 39) | @Override
    method equals (line 52) | @Override
    method clone (line 79) | @Override
    method toString (line 84) | @Override
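`TopicQueueId` is a final, serializable value class keyed on a topic plus queue id, with overridden `equals`/`hashCode`. A hypothetical stand-in (the class name `QueueKey` and field details are illustrative) showing why that contract matters for map lookups:

```java
import java.io.Serializable;
import java.util.Objects;

// Hypothetical stand-in for a (topic, queueId) key: immutable, with
// equals/hashCode overridden so instances behave correctly as HashMap keys.
public final class QueueKey implements Serializable {
    private final String topic;
    private final int queueId;

    public QueueKey(String topic, int queueId) {
        this.topic = topic;
        this.queueId = queueId;
    }

    public String topic() { return topic; }
    public int queueId() { return queueId; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof QueueKey)) return false;
        QueueKey other = (QueueKey) o;
        return queueId == other.queueId && Objects.equals(topic, other.topic);
    }

    @Override
    public int hashCode() { return Objects.hash(topic, queueId); }

    @Override
    public String toString() { return "QueueKey{topic=" + topic + ", queueId=" + queueId + "}"; }
}
```

Two distinct instances built from the same topic and queue id compare equal and hash identically, so offset maps keyed by this type resolve the same queue to the same entry.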

FILE: fire-connectors/spark-connectors/spark-rocketmq/src/main/java/org/apache/rocketmq/spark/streaming/DefaultMessageRetryManager.java
  class DefaultMessageRetryManager (line 29) | public class DefaultMessageRetryManager implements MessageRetryManager{
    method DefaultMessageRetryManager (line 35) | public DefaultMessageRetryManager(BlockingQueue<MessageSet> queue, fin...
    method ack (line 56) | @Override
    method fail (line 61) | @Override
    method mark (line 79) | @Override
    method needRetry (line 85) | @Override
    method setCache (line 91) | public void setCache(Map<String,MessageSet> cache) {

FILE: fire-connectors/spark-connectors/spark-rocketmq/src/main/java/org/apache/rocketmq/spark/streaming/MessageRetryManager.java
  type MessageRetryManager (line 23) | public interface MessageRetryManager {
    method ack (line 28) | void ack(String id);
    method fail (line 34) | void fail(String id);
    method mark (line 40) | void mark(MessageSet messageSet);
    method needRetry (line 47) | boolean needRetry(MessageSet messageSet);
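The `MessageRetryManager` contract above pairs `mark` (batch handed out), `ack` (batch done), `fail` (batch re-queued), and `needRetry` (retry budget check). A minimal in-memory sketch of that contract, not the actual `DefaultMessageRetryManager` (the `Batch` type, retry-budget field, and `pendingRetries` helper are simplified stand-ins):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory sketch of a mark/ack/fail retry manager:
// mark() tracks an in-flight batch, ack() drops it, fail() re-queues it
// until the retry budget is exhausted.
public class SimpleRetryManager {
    public static class Batch {
        final String id;
        int retries;
        Batch(String id) { this.id = id; }
    }

    private final Map<String, Batch> inFlight = new HashMap<>();
    private final List<Batch> queue = new ArrayList<>();
    private final int maxRetries;

    public SimpleRetryManager(int maxRetries) { this.maxRetries = maxRetries; }

    public void mark(Batch batch) { inFlight.put(batch.id, batch); }

    public void ack(String id) { inFlight.remove(id); }

    public void fail(String id) {
        Batch batch = inFlight.remove(id);
        if (batch != null && needRetry(batch)) {
            batch.retries++;
            queue.add(batch);   // re-queue for another delivery attempt
        }
    }

    public boolean needRetry(Batch batch) { return batch.retries < maxRetries; }

    public int pendingRetries() { return queue.size(); }
}
```

The reliable receiver variant indexed below builds on exactly this loop: it `mark`s each emitted batch and routes `ack`/`fail` callbacks back into the manager.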

FILE: fire-connectors/spark-connectors/spark-rocketmq/src/main/java/org/apache/rocketmq/spark/streaming/MessageSet.java
  class MessageSet (line 31) | public class MessageSet implements Iterator<Message>, Serializable{
    method MessageSet (line 38) | public MessageSet(String id, List<MessageExt> data) {
    method MessageSet (line 44) | public MessageSet(List<MessageExt> data) {
    method getId (line 48) | public String getId() {
    method getData (line 52) | public List<MessageExt> getData() {
    method getTimestamp (line 56) | public long getTimestamp() {
    method setTimestamp (line 60) | public void setTimestamp(long timestamp) {
    method getRetries (line 64) | public int getRetries() {
    method setRetries (line 68) | public void setRetries(int retries) {
    method hasNext (line 72) | @Override
    method next (line 77) | @Override
    method remove (line 82) | @Override
    method toString (line 87) | @Override

FILE: fire-connectors/spark-connectors/spark-rocketmq/src/main/java/org/apache/rocketmq/spark/streaming/ReliableRocketMQReceiver.java
  class ReliableRocketMQReceiver (line 32) | public class ReliableRocketMQReceiver extends RocketMQReceiver {
    method ReliableRocketMQReceiver (line 37) | public ReliableRocketMQReceiver(Properties properties, StorageLevel st...
    method onStart (line 41) | @Override
    method process (line 58) | @Override
    method ack (line 72) | public void ack(Object msgId) {
    method fail (line 77) | public void fail(Object msgId) {
    method onStop (line 82) | @Override
    class MessageSender (line 87) | class MessageSender extends Thread {
      method run (line 88) | @Override

FILE: fire-connectors/spark-connectors/spark-rocketmq/src/main/java/org/apache/rocketmq/spark/streaming/RocketMQReceiver.java
  class RocketMQReceiver (line 42) | public class RocketMQReceiver extends Receiver<Message> {
    method RocketMQReceiver (line 47) | public RocketMQReceiver(Properties properties, StorageLevel storageLev...
    method onStart (line 52) | @Override
    method process (line 93) | public boolean process(List<MessageExt> msgs) {
    method onStop (line 107) | @Override

FILE: fire-core/src/main/java/com/zto/fire/core/TimeCost.java
  class TimeCost (line 33) | public class TimeCost implements Serializable {
    method getId (line 64) | public String getId() {
    method getLoad (line 68) | public String getLoad() {
    method getMsg (line 72) | public String getMsg() {
    method getTimeCost (line 76) | public Long getTimeCost() {
    method getStartTime (line 83) | public String getStartTime() {
    method setStartTime (line 87) | public void setStartTime(String startTime) {
    method getEndTime (line 91) | public String getEndTime() {
    method setEndTime (line 95) | public void setEndTime(String endTime) {
    method getIp (line 99) | public String getIp() {
    method getStageId (line 103) | public Integer getStageId() {
    method getTaskId (line 107) | public Long getTaskId() {
    method getPartitionId (line 111) | public Integer getPartitionId() {
    method getIsFire (line 115) | public Boolean getIsFire() {
    method getApplicationId (line 119) | public static String getApplicationId() {
    method setApplicationId (line 123) | public static void setApplicationId(String applicationId) {
    method getExecutorId (line 127) | public static String getExecutorId() {
    method getMainClass (line 131) | public static String getMainClass() {
    method setExecutorId (line 135) | public static void setExecutorId(String executorId) {
    method setMainClass (line 139) | public static void setMainClass(String mainClass) {
    method setMsg (line 143) | public void setMsg(String msg) {
    method setTimeCost (line 147) | public void setTimeCost(Long timeCost) {
    method getFire (line 151) | public Boolean getFire() {
    method setFire (line 155) | public void setFire(Boolean fire) {
    method setIp (line 159) | public void setIp(String ip) {
    method setLoad (line 163) | public void setLoad(String load) {
    method setStageId (line 167) | public void setStageId(Integer stageId) {
    method setTaskId (line 171) | public void setTaskId(Long taskId) {
    method setPartitionId (line 175) | public void setPartitionId(Integer partitionId) {
    method getStart (line 179) | public Long getStart() {
    method setStart (line 183) | public void setStart(Long start) {
    method getStackTraceInfo (line 187) | public String getStackTraceInfo() {
    method setStackTraceInfo (line 191) | public void setStackTraceInfo(String stackTraceInfo) {
    method getModule (line 195) | public String getModule() {
    method getIo (line 199) | public Integer getIo() {
    method getLevel (line 203) | public String getLevel() {
    method setLevel (line 207) | public void setLevel(String level) {
    method getCpuUsage (line 211) | public String getCpuUsage() {
    method setCpuUsage (line 215) | public void setCpuUsage(String cpuUsage) {
    method lable (line 219) | private String lable() {
    method toString (line 227) | @Override
    method TimeCost (line 239) | private TimeCost() {
    method build (line 250) | public static TimeCost build() {
    method info (line 259) | public TimeCost info(String msg, String module, Integer io, Boolean is...

FILE: fire-core/src/main/java/com/zto/fire/core/bean/ArthasParam.java
  class ArthasParam (line 26) | public class ArthasParam {
    method ArthasParam (line 31) | public ArthasParam() {
    method ArthasParam (line 34) | public ArthasParam(String command, Boolean distribute, String ip) {
    method getCommand (line 40) | public String getCommand() {
    method setCommand (line 44) | public void setCommand(String command) {
    method getDistribute (line 48) | public Boolean getDistribute() {
    method setDistribute (line 52) | public void setDistribute(Boolean distribute) {
    method getIp (line 56) | public String getIp() {
    method setIp (line 60) | public void setIp(String ip) {

FILE: fire-core/src/main/java/com/zto/fire/core/task/SchedulerManager.java
  class SchedulerManager (line 48) | public abstract class SchedulerManager implements Serializable {
    method SchedulerManager (line 76) | protected SchedulerManager() {}
    method init (line 81) | protected static void init() {
    method addScanTask (line 102) | protected void addScanTask(Object... tasks) {
    method label (line 117) | protected abstract String label();
    method registerTasks (line 126) | public synchronized void registerTasks(Object... taskInstances) {
    method buildSchedulerInfo (line 214) | protected String buildSchedulerInfo(Scheduled anno) {
    method execute (line 242) | public static void execute(JobExecutionContext context) {
    method schedulerIsStarted (line 266) | public synchronized boolean schedulerIsStarted() {
    method shutdown (line 283) | public static synchronized void shutdown(boolean waitForJobsToComplete) {

FILE: fire-core/src/main/java/com/zto/fire/core/task/TaskRunner.java
  class TaskRunner (line 31) | public class TaskRunner implements Job, Serializable {
    method execute (line 32) | @Override

FILE: fire-core/src/main/java/com/zto/fire/core/task/TaskRunnerQueue.java
  class TaskRunnerQueue (line 27) | @DisallowConcurrentExecution

FILE: fire-engines/fire-flink/src/main/java/com/zto/fire/flink/bean/CheckpointParams.java
  class CheckpointParams (line 8) | public class CheckpointParams {
    method getInterval (line 25) | public Long getInterval() {
    method setInterval (line 29) | public void setInterval(Long interval) {
    method getTimeout (line 33) | public Long getTimeout() {
    method setTimeout (line 37) | public void setTimeout(Long timeout) {
    method getMinPauseBetween (line 41) | public Long getMinPauseBetween() {
    method setMinPauseBetween (line 45) | public void setMinPauseBetween(Long minPauseBetween) {

FILE: fire-engines/fire-flink/src/main/java/com/zto/fire/flink/bean/DistributeBean.java
  class DistributeBean (line 28) | public class DistributeBean {
    method DistributeBean (line 32) | public DistributeBean() {
    method DistributeBean (line 36) | public DistributeBean(DistributeModule module, String json) {
    method getModule (line 41) | public DistributeModule getModule() {
    method setModule (line 45) | public void setModule(DistributeModule module) {
    method getJson (line 49) | public String getJson() {
    method setJson (line 53) | public void setJson(String json) {

FILE: fire-engines/fire-flink/src/main/java/com/zto/fire/flink/bean/FlinkTableSchema.java
  class FlinkTableSchema (line 37) | public class FlinkTableSchema implements Serializable {
    method FlinkTableSchema (line 42) | public FlinkTableSchema(TableSchema schema) {
    method FlinkTableSchema (line 46) | private FlinkTableSchema(String[] fieldNames, DataType[] fieldDataType...
    method getFieldDataTypes (line 80) | public DataType[] getFieldDataTypes() {
    method getFieldTypes (line 91) | public TypeInformation<?>[] getFieldTypes() {
    method getFieldDataType (line 100) | public Optional<DataType> getFieldDataType(int fieldIndex) {
    method getFieldType (line 114) | public Optional<TypeInformation<?>> getFieldType(int fieldIndex) {
    method getFieldDataType (line 124) | public Optional<DataType> getFieldDataType(String fieldName) {
    method getFieldType (line 138) | public Optional<TypeInformation<?>> getFieldType(String fieldName) {
    method getFieldCount (line 146) | public int getFieldCount() {
    method getFieldNames (line 153) | public String[] getFieldNames() {
    method getFieldName (line 162) | public Optional<String> getFieldName(int fieldIndex) {
    method toString (line 169) | @Override
    method equals (line 179) | @Override
    method hashCode (line 192) | @Override

FILE: fire-engines/fire-flink/src/main/java/com/zto/fire/flink/enu/DistributeModule.java
  type DistributeModule (line 11) | public enum DistributeModule {
    method DistributeModule (line 14) | DistributeModule(String type) {
    method parse (line 20) | public static DistributeModule parse(String type) {
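`DistributeModule` pairs each enum constant with a type string and exposes a static `parse(String)` to map config values back onto constants. A sketch of that pattern (the constant names and error handling here are illustrative, not the actual enum body):

```java
// Sketch of the parse(String) pattern: map a configuration string onto an
// enum constant, with a clear error for unknown values.
public enum Module {
    CONF("conf"), ARTHAS("arthas");   // constant names are illustrative

    private final String type;

    Module(String type) { this.type = type; }

    public static Module parse(String type) {
        for (Module m : values()) {
            if (m.type.equalsIgnoreCase(type)) return m;
        }
        throw new IllegalArgumentException("Unknown module type: " + type);
    }
}
```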

FILE: fire-engines/fire-flink/src/main/java/com/zto/fire/flink/ext/watermark/FirePeriodicWatermarks.java
  class FirePeriodicWatermarks (line 30) | public abstract class FirePeriodicWatermarks<T> implements AssignerWithP...
    method FirePeriodicWatermarks (line 38) | protected FirePeriodicWatermarks() {
    method FirePeriodicWatermarks (line 46) | protected FirePeriodicWatermarks(long maxOutOfOrder) {
    method getCurrentWatermark (line 56) | @Override

FILE: fire-engines/fire-flink/src/main/java/com/zto/fire/flink/task/FlinkSchedulerManager.java
  class FlinkSchedulerManager (line 29) | public class FlinkSchedulerManager extends SchedulerManager {
    method FlinkSchedulerManager (line 37) | private FlinkSchedulerManager() {
    method getInstance (line 43) | public static SchedulerManager getInstance() {
    method label (line 47) | @Override

FILE: fire-engines/fire-spark/src/main/java/com/zto/fire/spark/bean/ColumnMeta.java
  class ColumnMeta (line 25) | public class ColumnMeta {
    method ColumnMeta (line 43) | public ColumnMeta() {
    method ColumnMeta (line 46) | private ColumnMeta(Builder builder) {
    method getDatabase (line 57) | public String getDatabase() {
    method getTableName (line 61) | public String getTableName() {
    method getDescription (line 65) | public String getDescription() {
    method getColumnName (line 69) | public String getColumnName() {
    method getDataType (line 73) | public String getDataType() {
    method getNullable (line 77) | public Boolean getNullable() {
    method getPartition (line 81) | public Boolean getPartition() {
    method getBucket (line 85) | public Boolean getBucket() {
    class Builder (line 89) | public static class Builder extends ColumnMeta {
      method setDescription (line 90) | public Builder setDescription(String description) {
      method setColumnName (line 95) | public Builder setColumnName(String columnName) {
      method setDataType (line 100) | public Builder setDataType(String dataType) {
      method setNullable (line 105) | public Builder setNullable(Boolean nullable) {
      method setPartition (line 110) | public Builder setPartition(Boolean partition) {
      method setBucket (line 115) | public Builder setBucket(Boolean bucket) {
      method setDatabase (line 120) | public Builder setDatabase(String database) {
      method setTableName (line 125) | public Builder setTableName(String tableName) {
      method build (line 130) | public ColumnMeta build() {
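`ColumnMeta` is constructed through chained setters on a nested `Builder`, finished by `build()`. A hedged sketch of that builder shape with the field set trimmed for brevity (a conventional nested builder is shown here; the indexed `Builder` actually extends `ColumnMeta` itself):

```java
// Sketch of the builder pattern: chained setters on a nested Builder,
// then build() producing the immutable, populated bean.
public class Column {
    private final String columnName;
    private final String dataType;
    private final boolean nullable;

    private Column(Builder b) {
        this.columnName = b.columnName;
        this.dataType = b.dataType;
        this.nullable = b.nullable;
    }

    public String getColumnName() { return columnName; }
    public String getDataType() { return dataType; }
    public boolean isNullable() { return nullable; }

    public static class Builder {
        private String columnName;
        private String dataType;
        private boolean nullable = true;   // sensible default until overridden

        public Builder setColumnName(String v) { this.columnName = v; return this; }
        public Builder setDataType(String v) { this.dataType = v; return this; }
        public Builder setNullable(boolean v) { this.nullable = v; return this; }
        public Column build() { return new Column(this); }
    }
}
```

Usage: `new Column.Builder().setColumnName("id").setDataType("bigint").setNullable(false).build()`.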

FILE: fire-engines/fire-spark/src/main/java/com/zto/fire/spark/bean/FunctionMeta.java
  class FunctionMeta (line 24) | public class FunctionMeta {
    method FunctionMeta (line 36) | public FunctionMeta() {
    method FunctionMeta (line 39) | public FunctionMeta(String description, String database, String name, ...
    method getDescription (line 47) | public String getDescription() {
    method setDescription (line 51) | public void setDescription(String description) {
    method getDatabase (line 55) | public String getDatabase() {
    method setDatabase (line 59) | public void setDatabase(String database) {
    method getName (line 63) | public String getName() {
    method setName (line 67) | public void setName(String name) {
    method getClassName (line 71) | public String getClassName() {
    method setClassName (line 75) | public void setClassName(String className) {
    method getTemporary (line 79) | public Boolean getTemporary() {
    method setTemporary (line 83) | public void setTemporary(Boolean temporary) {

FILE: fire-engines/fire-spark/src/main/java/com/zto/fire/spark/bean/GenerateBean.java
  type GenerateBean (line 12) | public interface GenerateBean<T extends GenerateBean<T>> {
    method generate (line 18) | List<T> generate();

FILE: fire-engines/fire-spark/src/main/java/com/zto/fire/spark/bean/RestartParams.java
  class RestartParams (line 27) | public class RestartParams {
    method getBatchDuration (line 39) | public long getBatchDuration() {
    method setBatchDuration (line 43) | public void setBatchDuration(long batchDuration) {
    method isRestartSparkContext (line 47) | public boolean isRestartSparkContext() {
    method setRestartSparkContext (line 51) | public void setRestartSparkContext(boolean restartSparkContext) {
    method getSparkConf (line 55) | public Map<String, String> getSparkConf() {
    method setSparkConf (line 59) | public void setSparkConf(Map<String, String> sparkConf) {
    method RestartParams (line 63) | public RestartParams() {
    method isStopGracefully (line 66) | public boolean isStopGracefully() {
    method setStopGracefully (line 70) | public void setStopGracefully(boolean stopGracefully) {
    method isCheckPoint (line 74) | public boolean isCheckPoint() {
    method setCheckPoint (line 78) | public void setCheckPoint(boolean checkPoint) {
    method RestartParams (line 82) | public RestartParams(long batchDuration, boolean restartSparkContext, ...

FILE: fire-engines/fire-spark/src/main/java/com/zto/fire/spark/bean/SparkInfo.java
  class SparkInfo (line 28) | public class SparkInfo {
    method getAppName (line 164) | public String getAppName() {
    method setAppName (line 168) | public void setAppName(String appName) {
    method getClassName (line 172) | public String getClassName() {
    method setClassName (line 176) | public void setClassName(String className) {
    method getFireVersion (line 180) | public String getFireVersion() {
    method setFireVersion (line 184) | public void setFireVersion(String fireVersion) {
    method getConf (line 188) | public Map<String, String> getConf() {
    method setConf (line 192) | public void setConf(Map<String, String> conf) {
    method getVersion (line 196) | public String getVersion() {
    method setVersion (line 200) | public void setVersion(String version) {
    method getMaster (line 204) | public String getMaster() {
    method setMaster (line 208) | public void setMaster(String master) {
    method getApplicationId (line 212) | public String getApplicationId() {
    method setApplicationId (line 216) | public void setApplicationId(String applicationId) {
    method getApplicationAttemptId (line 220) | public String getApplicationAttemptId() {
    method setApplicationAttemptId (line 224) | public void setApplicationAttemptId(String applicationAttemptId) {
    method getUi (line 228) | public String getUi() {
    method setUi (line 232) | public void setUi(String ui) {
    method getPid (line 236) | public String getPid() {
    method setPid (line 240) | public void setPid(String pid) {
    method getUptime (line 244) | public String getUptime() {
    method setUptime (line 248) | public void setUptime(String uptime) {
    method getLaunchTime (line 252) | public String getLaunchTime() {
    method setLaunchTime (line 256) | public void setLaunchTime(String launchTime) {
    method getExecutorMemory (line 260) | public String getExecutorMemory() {
    method setExecutorMemory (line 264) | public void setExecutorMemory(String executorMemory) {
    method getExecutorInstances (line 268) | public String getExecutorInstances() {
    method setExecutorInstances (line 272) | public void setExecutorInstances(String executorInstances) {
    method getExecutorCores (line 276) | public String getExecutorCores() {
    method setExecutorCores (line 280) | public void setExecutorCores(String executorCores) {
    method getDriverCores (line 284) | public String getDriverCores() {
    method setDriverCores (line 288) | public void setDriverCores(String driverCores) {
    method getDriverMemory (line 292) | public String getDriverMemory() {
    method setDriverMemory (line 296) | public void setDriverMemory(String driverMemory) {
    method getDriverMemoryOverhead (line 300) | public String getDriverMemoryOverhead() {
    method setDriverMemoryOverhead (line 304) | public void setDriverMemoryOverhead(String driverMemoryOverhead) {
    method getDriverHost (line 308) | public String getDriverHost() {
    method setDriverHost (line 312) | public void setDriverHost(String driverHost) {
    method getDriverPort (line 316) | public String getDriverPort() {
    method setDriverPort (line 320) | public void setDriverPort(String driverPort) {
    method getExecutorMemoryOverhead (line 324) | public String getExecutorMemoryOverhead() {
    method setExecutorMemoryOverhead (line 328) | public void setExecutorMemoryOverhead(String executorMemoryOverhead) {
    method getMemory (line 332) | public String getMemory() {
    method setMemory (line 336) | public void setMemory(String memory) {
    method getCpu (line 340) | public String getCpu() {
    method setCpu (line 344) | public void setCpu(String cpu) {
    method getBatchDuration (line 348) | public String getBatchDuration() {
    method setBatchDuration (line 352) | public void setBatchDuration(String batchDuration) {
    method getTimestamp (line 356) | public String getTimestamp() {
    method setTimestamp (line 360) | public void setTimestamp(String timestamp) {
    method getRestPort (line 364) | public String getRestPort() {
    method setRestPort (line 368) | public void setRestPort(String restPort) {
    method getProperties (line 372) | public Map<String, String> getProperties() {
    method setProperties (line 376) | public void setProperties(Map<String, String> properties) {
    method computeCpuMemory (line 383) | public void computeCpuMemory() {

FILE: fire-engines/fire-spark/src/main/java/com/zto/fire/spark/bean/TableMeta.java
  class TableMeta (line 24) | public class TableMeta {
    method getDescription (line 36) | public String getDescription() {
    method setDescription (line 40) | public void setDescription(String description) {
    method getDatabase (line 44) | public String getDatabase() {
    method setDatabase (line 48) | public void setDatabase(String database) {
    method getTableName (line 52) | public String getTableName() {
    method setTableName (line 56) | public void setTableName(String tableName) {
    method getTableType (line 60) | public String getTableType() {
    method setTableType (line 64) | public void setTableType(String tableType) {
    method getTemporary (line 68) | public Boolean getTemporary() {
    method setTemporary (line 72) | public void setTemporary(Boolean temporary) {
    method TableMeta (line 76) | public TableMeta() {
    method TableMeta (line 79) | public TableMeta(String description, String database, String tableName...

FILE: fire-engines/fire-spark/src/main/java/com/zto/fire/spark/task/SparkSchedulerManager.java
  class SparkSchedulerManager (line 30) | public class SparkSchedulerManager extends SchedulerManager {
    method SparkSchedulerManager (line 38) | private SparkSchedulerManager() {}
    method getInstance (line 43) | public static SchedulerManager getInstance() {
    method label (line 47) | @Override

FILE: fire-enhance/apache-arthas/src/main/java/com/taobao/arthas/agent/attach/ArthasAgent.java
  class ArthasAgent (line 18) | public class ArthasAgent {
    method ArthasAgent (line 37) | public ArthasAgent() {
    method ArthasAgent (line 41) | public ArthasAgent(Map<String, String> configMap) {
    method ArthasAgent (line 45) | public ArthasAgent(String arthasHome) {
    method ArthasAgent (line 49) | public ArthasAgent(Map<String, String> configMap, String arthasHome, b...
    method attach (line 60) | public static void attach() {
    method attach (line 68) | public static void attach(Map<String, String> configMap) {
    method attach (line 76) | public static void attach(String arthasHome) {
    method init (line 80) | public void init() throws IllegalStateException {
    method createTempDir (line 139) | private static File createTempDir() {
    method getErrorMessage (line 153) | public String getErrorMessage() {
    method setErrorMessage (line 157) | public void setErrorMessage(String errorMessage) {

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/apache/flink/client/deployment/application/ApplicationDispatcherBootstrap.java
  class ApplicationDispatcherBootstrap (line 74) | @Internal
    method ApplicationDispatcherBootstrap (line 95) | public ApplicationDispatcherBootstrap(
    method stop (line 113) | @Override
    method getApplicationExecutionFuture (line 124) | @VisibleForTesting
    method getApplicationCompletionFuture (line 129) | @VisibleForTesting
    method getClusterShutdownFuture (line 134) | @VisibleForTesting
    method runApplicationAndShutdownClusterAsync (line 143) | private CompletableFuture<Acknowledge> runApplicationAndShutdownCluste...
    method fixJobIdAndRunApplicationAsync (line 178) | private CompletableFuture<Void> fixJobIdAndRunApplicationAsync(
    method runApplicationAsync (line 211) | private CompletableFuture<Void> runApplicationAsync(
    method runApplicationEntryPoint (line 240) | private void runApplicationEntryPoint(
    method getApplicationResult (line 275) | private CompletableFuture<Void> getApplicationResult(
    method getJobResult (line 289) | private CompletableFuture<JobResult> getJobResult(
    method unwrapJobResultException (line 308) | private CompletableFuture<JobResult> unwrapJobResultException(

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/apache/flink/configuration/GlobalConfiguration.java
  class GlobalConfiguration (line 45) | @Internal
    method getSettings (line 75) | public static Map<String, String> getSettings() {
    method getRestPort (line 82) | public static int getRestPort() {
    method getRestPortAndClose (line 89) | public static int getRestPortAndClose() {
    method GlobalConfiguration (line 104) | private GlobalConfiguration() {
    method loadConfiguration (line 117) | public static Configuration loadConfiguration() {
    method loadConfiguration (line 128) | public static Configuration loadConfiguration(Configuration dynamicPro...
    method loadConfiguration (line 144) | public static Configuration loadConfiguration(final String configDir) {
    method loadConfiguration (line 157) | public static Configuration loadConfiguration(final String configDir, ...
    method loadYAMLResource (line 208) | private static Configuration loadYAMLResource(File file) {
    method fireBootstrap (line 269) | private static void fireBootstrap(Configuration config) {
    method getRunMode (line 279) | public static String getRunMode() {
    method loadTaskConfiguration (line 286) | private static void loadTaskConfiguration(Configuration config) {
    method isSensitive (line 335) | public static boolean isSensitive(String key) {

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/apache/flink/contrib/streaming/state/RocksDBStateBackend.java
  class RocksDBStateBackend (line 77) | public class RocksDBStateBackend extends AbstractManagedMemoryStateBackend
    type PriorityQueueStateType (line 83) | public enum PriorityQueueStateType {
    method initZKClient (line 226) | private void initZKClient() {
    method isRoundRobin (line 260) | private boolean isRoundRobin() {
    method RocksDBStateBackend (line 281) | public RocksDBStateBackend(String checkpointDataUri) throws IOException {
    method RocksDBStateBackend (line 299) | public RocksDBStateBackend(String checkpointDataUri, boolean enableInc...
    method RocksDBStateBackend (line 317) | public RocksDBStateBackend(URI checkpointDataUri) throws IOException {
    method RocksDBStateBackend (line 335) | public RocksDBStateBackend(URI checkpointDataUri, boolean enableIncrem...
    method RocksDBStateBackend (line 350) | public RocksDBStateBackend(StateBackend checkpointStreamBackend) {
    method RocksDBStateBackend (line 365) | public RocksDBStateBackend(
    method RocksDBStateBackend (line 382) | @Deprecated
    method RocksDBStateBackend (line 390) | @Deprecated
    method RocksDBStateBackend (line 403) | private RocksDBStateBackend(
    method configure (line 500) | @Override
    method getCheckpointBackend (line 515) | public StateBackend getCheckpointBackend() {
    method lazyInitializeForJob (line 519) | private void lazyInitializeForJob(
    method getNextStoragePath (line 573) | private File getNextStoragePath() {
    method resolveCheckpoint (line 606) | @Override
    method createCheckpointStorage (line 611) | @Override
    method createKeyedStateBackend (line 620) | @Override
    method createKeyedStateBackend (line 649) | @Override
    method createOperatorStateBackend (line 727) | @Override
    method configureOptionsFactory (line 747) | private RocksDBOptionsFactory configureOptionsFactory(
    method getMemoryConfiguration (line 812) | public RocksDBMemoryConfiguration getMemoryConfiguration() {
    method setDbStoragePath (line 826) | public void setDbStoragePath(String path) {
    method setDbStoragePaths (line 847) | public void setDbStoragePaths(String... paths) {
    method getDbStoragePaths (line 902) | public String[] getDbStoragePaths() {
    method isIncrementalCheckpointsEnabled (line 917) | public boolean isIncrementalCheckpointsEnabled() {
    method getPriorityQueueStateType (line 928) | public PriorityQueueStateType getPriorityQueueStateType() {
    method setPriorityQueueStateType (line 938) | public void setPriorityQueueStateType(PriorityQueueStateType priorityQ...
    method setPredefinedOptions (line 956) | public void setPredefinedOptions(@Nonnull PredefinedOptions options) {
    method getPredefinedOptions (line 972) | @VisibleForTesting
    method setRocksDBOptions (line 991) | public void setRocksDBOptions(RocksDBOptionsFactory optionsFactory) {
    method getRocksDBOptions (line 1003) | @Nullable
    method getNumberOfTransferThreads (line 1011) | public int getNumberOfTransferThreads() {
    method setNumberOfTransferThreads (line 1023) | public void setNumberOfTransferThreads(int numberOfTransferThreads) {
    method getNumberOfTransferingThreads (line 1033) | @Deprecated
    method setNumberOfTransferingThreads (line 1041) | @Deprecated
    method getWriteBatchSize (line 1049) | public long getWriteBatchSize() {
    method setWriteBatchSize (line 1061) | public void setWriteBatchSize(long writeBatchSize) {
    method createOptionsAndResourceContainer (line 1070) | @VisibleForTesting
    method createOptionsAndResourceContainer (line 1075) | @VisibleForTesting
    method toString (line 1085) | @Override
    method ensureRocksDBIsLoaded (line 1105) | @VisibleForTesting
    method resetRocksDBLoadedFlag (line 1174) | @VisibleForTesting

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/apache/flink/contrib/streaming/state/restore/RocksDBFullRestoreOperation.java
  class RocksDBFullRestoreOperation (line 65) | public class RocksDBFullRestoreOperation<K> extends AbstractRocksDBResto...
    method RocksDBFullRestoreOperation (line 88) | public RocksDBFullRestoreOperation(
    method restore (line 130) | @Override
    method restoreKeyGroupsInStateHandle (line 150) | private void restoreKeyGroupsInStateHandle()
    method restoreKVStateMetaData (line 178) | private void restoreKVStateMetaData() throws IOException, StateMigrati...
    method restoreKVStateData (line 227) | private void restoreKVStateData() throws IOException, RocksDBException {

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
  class CheckpointCoordinator (line 66) | public class CheckpointCoordinator {
    method getBaseInterval (line 144) | public long getBaseInterval() {
    method setBaseInterval (line 148) | public void setBaseInterval(long baseInterval) {
    method setCheckpointTimeout (line 152) | public void setCheckpointTimeout(long checkpointTimeout) {
    method getMinPauseBetweenCheckpoints (line 156) | public long getMinPauseBetweenCheckpoints() {
    method setMinPauseBetweenCheckpoints (line 160) | public void setMinPauseBetweenCheckpoints(long minPauseBetweenCheckpoi...
    method getInstance (line 166) | public static CheckpointCoordinator getInstance() {
    method CheckpointCoordinator (line 232) | public CheckpointCoordinator(
    method CheckpointCoordinator (line 266) | @VisibleForTesting
    method addMasterHook (line 373) | public boolean addMasterHook(MasterTriggerRestoreHook<?> hook) {
    method getNumberOfRegisteredMasterHooks (line 390) | public int getNumberOfRegisteredMasterHooks() {
    method setCheckpointStatsTracker (line 401) | public void setCheckpointStatsTracker(@Nullable CheckpointStatsTracker...
    method shutdown (line 415) | public void shutdown(JobStatus jobStatus) throws Exception {
    method isShutdown (line 444) | public boolean isShutdown() {
    method triggerSavepoint (line 461) | public CompletableFuture<CompletedCheckpoint> triggerSavepoint(
    method triggerSynchronousSavepoint (line 478) | public CompletableFuture<CompletedCheckpoint> triggerSynchronousSavepo...
    method triggerSavepointInternal (line 487) | private CompletableFuture<CompletedCheckpoint> triggerSavepointInternal(
    method triggerCheckpoint (line 519) | public CompletableFuture<CompletedCheckpoint> triggerCheckpoint(boolea...
    method triggerCheckpoint (line 523) | @VisibleForTesting
    method startTriggeringCheckpoint (line 542) | private void startTriggeringCheckpoint(CheckpointTriggerRequest reques...
    method initializeCheckpoint (line 680) | private CompletableFuture<CheckpointIdAndStorageLocation> initializeCh...
    method createPendingCheckpoint (line 707) | private PendingCheckpoint createPendingCheckpoint(
    method snapshotMasterState (line 775) | private CompletableFuture<Void> snapshotMasterState(PendingCheckpoint ...
    method snapshotTaskState (line 830) | private void snapshotTaskState(
    method onTriggerSuccess (line 856) | private void onTriggerSuccess() {
    method onTriggerFailure (line 869) | private void onTriggerFailure(
    method onTriggerFailure (line 885) | private void onTriggerFailure(@Nullable PendingCheckpoint checkpoint, ...
    method executeQueuedRequest (line 914) | private void executeQueuedRequest() {
    method chooseQueuedRequestToExecute (line 918) | private Optional<CheckpointTriggerRequest> chooseQueuedRequestToExecut...
    method chooseRequestToExecute (line 925) | private Optional<CheckpointTriggerRequest> chooseRequestToExecute(
    method maybeCompleteCheckpoint (line 934) | private boolean maybeCompleteCheckpoint(PendingCheckpoint checkpoint) {
    method receiveDeclineMessage (line 963) | public void receiveDeclineMessage(DeclineCheckpoint message, String ta...
    method receiveAcknowledgeMessage (line 1044) | public boolean receiveAcknowledgeMessage(
    method completePendingCheckpoint (line 1180) | private void completePendingCheckpoint(PendingCheckpoint pendingCheckp...
    method scheduleTriggerRequest (line 1278) | void scheduleTriggerRequest() {
    method sendAcknowledgeMessages (line 1289) | private void sendAcknowledgeMessages(long checkpointId, long timestamp) {
    method sendAbortedMessages (line 1304) | private void sendAbortedMessages(long checkpointId, long timeStamp) {
    method failUnacknowledgedPendingCheckpointsFor (line 1330) | public void failUnacknowledgedPendingCheckpointsFor(
    method rememberRecentCheckpointId (line 1339) | private void rememberRecentCheckpointId(long id) {
    method dropSubsumedCheckpoints (line 1346) | private void dropSubsumedCheckpoints(long checkpointId) {
    method restoreLatestCheckpointedState (line 1377) | @Deprecated
    method restoreLatestCheckpointedStateToSubtasks (line 1414) | public OptionalLong restoreLatestCheckpointedStateToSubtasks(
    method restoreLatestCheckpointedStateToAll (line 1451) | public boolean restoreLatestCheckpointedStateToAll(
    method restoreInitialCheckpointIfPresent (line 1476) | public boolean restoreInitialCheckpointIfPresent(final Set<ExecutionJo...
    method restoreLatestCheckpointedStateInternal (line 1495) | private OptionalLong restoreLatestCheckpointedStateInternal(
    method restoreSavepoint (line 1612) | public boolean restoreSavepoint(
    method getNumberOfPendingCheckpoints (line 1658) | public int getNumberOfPendingCheckpoints() {
    method getNumberOfRetainedSuccessfulCheckpoints (line 1664) | public int getNumberOfRetainedSuccessfulCheckpoints() {
    method getPendingCheckpoints (line 1670) | public Map<Long, PendingCheckpoint> getPendingCheckpoints() {
    method getSuccessfulCheckpoints (line 1676) | public List<CompletedCheckpoint> getSuccessfulCheckpoints() throws Exc...
    method getCheckpointStorage (line 1682) | public CheckpointStorageCoordinatorView getCheckpointStorage() {
    method getCheckpointStore (line 1686) | public CompletedCheckpointStore getCheckpointStore() {
    method getCheckpointTimeout (line 1690) | public long getCheckpointTimeout() {
    method getTriggerRequestQueue (line 1695) | @Deprecated
    method isTriggering (line 1703) | public boolean isTriggering() {
    method isCurrentPeriodicTriggerAvailable (line 1707) | @VisibleForTesting
    method isPeriodicCheckpointingConfigured (line 1717) | public boolean isPeriodicCheckpointingConfigured() {
    method startCheckpointScheduler (line 1725) | public void startCheckpointScheduler() {
    method stopCheckpointScheduler (line 1739) | public void stopCheckpointScheduler() {
    method abortPendingCheckpoints (line 1758) | public void abortPendingCheckpoints(CheckpointException exception) {
    method abortPendingCheckpoints (line 1764) | private void abortPendingCheckpoints(
    method rescheduleTrigger (line 1781) | private void rescheduleTrigger(long tillNextMillis) {
    method cancelPeriodicTrigger (line 1786) | private void cancelPeriodicTrigger() {
    method getRandomInitDelay (line 1793) | private long getRandomInitDelay() {
    method scheduleTriggerWithDelay (line 1797) | private ScheduledFuture<?> scheduleTriggerWithDelay(long initDelay) {
    method restoreStateToCoordinators (line 1802) | private void restoreStateToCoordinators(
    method createActivatorDeactivator (line 1819) | public JobStatusListener createActivatorDeactivator() {
    method getNumQueuedRequests (line 1833) | int getNumQueuedRequests() {
    class ScheduledTrigger (line 1841) | private final class ScheduledTrigger implements Runnable {
      method run (line 1843) | @Override
    method discardSubtaskState (line 1862) | private void discardSubtaskState(
    method abortPendingCheckpoint (line 1890) | private void abortPendingCheckpoint(
    method abortPendingCheckpoint (line 1896) | private void abortPendingCheckpoint(
    method preCheckGlobalState (line 1934) | private void preCheckGlobalState(boolean isPeriodic) throws Checkpoint...
    method getTriggerExecutions (line 1952) | private Execution[] getTriggerExecutions() throws CheckpointException {
    method getAckTasks (line 1986) | private Map<ExecutionAttemptID, ExecutionVertex> getAckTasks() throws ...
    method abortPendingAndQueuedCheckpoints (line 2005) | private void abortPendingAndQueuedCheckpoints(CheckpointException exce...
    class CheckpointCanceller (line 2015) | private class CheckpointCanceller implements Runnable {
      method CheckpointCanceller (line 2019) | private CheckpointCanceller(PendingCheckpoint pendingCheckpoint) {
      method run (line 2023) | @Override
    method getCheckpointException (line 2042) | private static CheckpointException getCheckpointException(
    class CheckpointIdAndStorageLocation (line 2051) | private static class CheckpointIdAndStorageLocation {
      method CheckpointIdAndStorageLocation (line 2055) | CheckpointIdAndStorageLocation(
    class CheckpointTriggerRequest (line 2063) | static class CheckpointTriggerRequest {
      method CheckpointTriggerRequest (line 2071) | CheckpointTriggerRequest(
      method getOnCompletionFuture (line 2082) | CompletableFuture<CompletedCheckpoint> getOnCompletionFuture() {
      method completeExceptionally (line 2086) | public void completeExceptionally(CheckpointException exception) {
      method isForce (line 2090) | public boolean isForce() {
    type OperatorCoordinatorRestoreBehavior (line 2095) | private enum OperatorCoordinatorRestoreBehavior {

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/apache/flink/runtime/util/EnvironmentInformation.java
  class EnvironmentInformation (line 44) | public class EnvironmentInformation {
    method isJobManager (line 58) | public static boolean isJobManager() {
    method getSettings (line 65) | public static Map<String, String> getSettings() {
    method setSetting (line 72) | public static void setSetting(String key, String value) {
    method getVersion (line 85) | public static String getVersion() {
    method getScalaVersion (line 94) | public static String getScalaVersion() {
    method getBuildTime (line 99) | public static Instant getBuildTime() {
    method getBuildTimeString (line 107) | public static String getBuildTimeString() {
    method getGitCommitId (line 112) | public static String getGitCommitId() {
    method getGitCommitIdAbbrev (line 117) | public static String getGitCommitIdAbbrev() {
    method getGitCommitTime (line 122) | public static Instant getGitCommitTime() {
    method getGitCommitTimeString (line 130) | public static String getGitCommitTimeString() {
    method getRevisionInformation (line 140) | public static RevisionInformation getRevisionInformation() {
    class Versions (line 144) | private static final class Versions {
      method getProperty (line 165) | private String getProperty(Properties properties, String key, String...
      method Versions (line 173) | public Versions() {
    class VersionsHolder (line 222) | private static final class VersionsHolder {
    method getVersionsInstance (line 226) | private static Versions getVersionsInstance() {
    method getHadoopUser (line 235) | public static String getHadoopUser() {
    method getMaxJvmHeapMemory (line 271) | public static long getMaxJvmHeapMemory() {
    method getSizeOfFreeHeapMemoryWithDefrag (line 299) | public static long getSizeOfFreeHeapMemoryWithDefrag() {
    method getSizeOfFreeHeapMemory (line 313) | public static long getSizeOfFreeHeapMemory() {
    method getJvmVersion (line 323) | public static String getJvmVersion() {
    method getJvmStartupOptions (line 343) | public static String getJvmStartupOptions() {
    method getJvmStartupOptionsArray (line 363) | public static String[] getJvmStartupOptionsArray() {
    method getTemporaryFileDirectory (line 379) | public static String getTemporaryFileDirectory() {
    method getOpenFileHandlesLimit (line 392) | public static long getOpenFileHandlesLimit() {
    method parseCommand (line 418) | private static void parseCommand(String[] commandLineArgs) {
    method logEnvironmentInfo (line 445) | public static void logEnvironmentInfo(
    method getHadoopVersionString (line 532) | public static String getHadoopVersionString() {
    method EnvironmentInformation (line 552) | private EnvironmentInformation() {}
    class RevisionInformation (line 560) | public static class RevisionInformation {
      method RevisionInformation (line 568) | public RevisionInformation(String commitId, String commitDate) {

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
  class TableEnvironmentImpl (line 71) | @Internal
    method createTemporaryTable (line 105) | @Override
    method TableEnvironmentImpl (line 114) | protected TableEnvironmentImpl(
    method create (line 163) | public static TableEnvironmentImpl create(EnvironmentSettings settings) {
    method fromValues (line 212) | @Override
    method fromValues (line 217) | @Override
    method fromValues (line 222) | @Override
    method fromValues (line 227) | @Override
    method fromValues (line 234) | @Override
    method fromValues (line 243) | @Override
    method getPlanner (line 252) | @VisibleForTesting
    method fromTableSource (line 257) | @Override
    method registerCatalog (line 265) | @Override
    method getCatalog (line 270) | @Override
    method loadModule (line 275) | @Override
    method unloadModule (line 280) | @Override
    method registerFunction (line 285) | @Override
    method createTemporarySystemFunction (line 290) | @Override
    method createTemporarySystemFunction (line 298) | @Override
    method dropTemporarySystemFunction (line 303) | @Override
    method createFunction (line 308) | @Override
    method createFunction (line 313) | @Override
    method dropFunction (line 323) | @Override
    method createTemporaryFunction (line 329) | @Override
    method createTemporaryFunction (line 337) | @Override
    method dropTemporaryFunction (line 344) | @Override
    method registerTable (line 350) | @Override
    method createTemporaryView (line 356) | @Override
    method createTemporaryView (line 362) | private void createTemporaryView(UnresolvedIdentifier identifier, Tabl...
    method scan (line 376) | @Override
    method from (line 388) | @Override
    method insertInto (line 400) | @Override
    method insertInto (line 406) | @Override
    method insertIntoInternal (line 415) | private void insertIntoInternal(UnresolvedIdentifier unresolvedIdentif...
    method scanInternal (line 426) | private Optional<CatalogQueryOperation> scanInternal(UnresolvedIdentif...
    method connect (line 434) | @Override
    method listCatalogs (line 439) | @Override
    method listModules (line 444) | @Override
    method listDatabases (line 449) | @Override
    method listTables (line 458) | @Override
    method listViews (line 463) | @Override
    method listTemporaryTables (line 468) | @Override
    method listTemporaryViews (line 473) | @Override
    method dropTemporaryTable (line 478) | @Override
    method dropTemporaryView (line 490) | @Override
    method listUserDefinedFunctions (line 502) | @Override
    method listFunctions (line 507) | @Override
    method explain (line 512) | @Override
    method explain (line 517) | @Override
    method explain (line 523) | @Override
    method explainSql (line 532) | @Override
    method explainInternal (line 544) | @Override
    method getCompletionHints (line 549) | @Override
    method sqlQuery (line 554) | @Override
    method executeSql (line 579) | @Override
    method createStatementSet (line 611) | @Override
    method executeInternal (line 616) | @Override
    method executeInternal (line 645) | @Override
    method sqlUpdate (line 673) | @Override
    method executeOperation (line 708) | private TableResult executeOperation(Operation operation) {
    method createCatalog (line 1056) | private TableResult createCatalog(CreateCatalogOperation operation) {
    method buildShowResult (line 1072) | private TableResult buildShowResult(String columnName, String[] object...
    method buildDescribeResult (line 1079) | private TableResult buildDescribeResult(TableSchema schema) {
    method buildResult (line 1130) | private TableResult buildResult(String[] headers, DataType[] types, Ob...
    method extractSinkIdentifierNames (line 1146) | private List<String> extractSinkIdentifierNames(List<ModifyOperation> ...
    method getJobName (line 1175) | private String getJobName(String defaultJobName) {
    method getCatalogOrThrowException (line 1180) | private Catalog getCatalogOrThrowException(String catalogName) {
    method getDDLOpExecuteErrorMsg (line 1188) | private String getDDLOpExecuteErrorMsg(String action) {
    method getCurrentCatalog (line 1192) | @Override
    method useCatalog (line 1197) | @Override
    method getCurrentDatabase (line 1202) | @Override
    method useDatabase (line 1207) | @Override
    method getConfig (line 1212) | @Override
    method execute (line 1217) | @Override
    method getParser (line 1223) | @Override
    method getCatalogManager (line 1228) | @Override
    method qualifyQueryOperation (line 1240) | protected QueryOperation qualifyQueryOperation(
    method validateTableSource (line 1250) | protected void validateTableSource(TableSource<?> tableSource) {
    method translateAndClearBuffer (line 1261) | protected List<Transformation<?>> translateAndClearBuffer() {
    method translate (line 1271) | private List<Transformation<?>> translate(List<ModifyOperation> modify...
    method buffer (line 1275) | private void buffer(List<ModifyOperation> modifyOperations) {
    method getExplainDetails (line 1279) | @VisibleForTesting
    method registerTableSourceInternal (line 1294) | @Override
    method registerTableSinkInternal (line 1333) | @Override
    method getTemporaryTable (line 1370) | private Optional<CatalogBaseTable> getTemporaryTable(ObjectIdentifier ...
    method createCatalogFunction (line 1377) | private TableResult createCatalogFunction(
    method alterCatalogFunction (line 1408) | private TableResult alterCatalogFunction(
    method dropCatalogFunction (line 1436) | private TableResult dropCatalogFunction(
    method createSystemFunction (line 1465) | private TableResult createSystemFunction(CreateTempSystemFunctionOpera...
    method dropSystemFunction (line 1481) | private TableResult dropSystemFunction(DropTempSystemFunctionOperation...
    method createTable (line 1493) | protected TableImpl createTable(QueryOperation tableOperation) {

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/apache/flink/util/ExceptionUtils.java
  class ExceptionUtils (line 47) | @Internal
    method stringifyException (line 64) | public static String stringifyException(final Throwable e) {
    method stringifyException (line 77) | public static String stringifyException(final Throwable e, String sql) {
    method isJvmFatalError (line 110) | public static boolean isJvmFatalError(Throwable t) {
    method isJvmFatalOrOutOfMemoryError (line 128) | public static boolean isJvmFatalOrOutOfMemoryError(Throwable t) {
    method tryEnrichOutOfMemoryError (line 146) | public static void tryEnrichOutOfMemoryError(
    method updateDetailMessage (line 177) | public static void updateDetailMessage(
    method updateDetailMessageOfThrowable (line 194) | private static void updateDetailMessageOfThrowable(
    method isMetaspaceOutOfMemoryError (line 221) | public static boolean isMetaspaceOutOfMemoryError(@Nullable Throwable ...
    method isDirectOutOfMemoryError (line 231) | public static boolean isDirectOutOfMemoryError(@Nullable Throwable t) {
    method isHeapSpaceOutOfMemoryError (line 235) | public static boolean isHeapSpaceOutOfMemoryError(@Nullable Throwable ...
    method isOutOfMemoryErrorWithMessageStartingWith (line 239) | private static boolean isOutOfMemoryErrorWithMessageStartingWith(
    method isOutOfMemoryError (line 247) | private static boolean isOutOfMemoryError(@Nullable Throwable t) {
    method rethrowIfFatalError (line 257) | public static void rethrowIfFatalError(Throwable t) {
    method rethrowIfFatalErrorOrOOM (line 270) | public static void rethrowIfFatalErrorOrOOM(Throwable t) {
    method firstOrSuppressed (line 310) | public static <T extends Throwable> T firstOrSuppressed(T newException...
    method rethrow (line 328) | public static void rethrow(Throwable t) {
    method rethrow (line 346) | public static void rethrow(Throwable t, String parentMessage) {
    method rethrowException (line 364) | public static void rethrowException(Throwable t, String parentMessage)...
    method rethrowException (line 381) | public static void rethrowException(Throwable t) throws Exception {
    method tryRethrowException (line 397) | public static void tryRethrowException(@Nullable Exception e) throws E...
    method tryRethrowIOException (line 410) | public static void tryRethrowIOException(Throwable t) throws IOExcepti...
    method rethrowIOException (line 429) | public static void rethrowIOException(Throwable t) throws IOException {
    method findSerializedThrowable (line 451) | public static <T extends Throwable> Optional<T> findSerializedThrowable(
    method findThrowable (line 484) | public static <T extends Throwable> Optional<T> findThrowable(
    method findThrowableSerializedAware (line 520) | public static <T extends Throwable> Optional<T> findThrowableSerialize...
    method findThrowable (line 548) | public static Optional<Throwable> findThrowable(
    method findThrowableWithMessage (line 574) | public static Optional<Throwable> findThrowableWithMessage(
    method stripExecutionException (line 599) | public static Throwable stripExecutionException(Throwable throwable) {
    method stripCompletionException (line 610) | public static Throwable stripCompletionException(Throwable throwable) {
    method stripException (line 622) | public static Throwable stripException(
    method tryDeserializeAndThrow (line 640) | public static void tryDeserializeAndThrow(Throwable throwable, ClassLo...
    method checkInterrupted (line 661) | public static void checkInterrupted(Throwable e) {
    method suppressExceptions (line 671) | public static void suppressExceptions(RunnableWithException action) {
    method ExceptionUtils (line 687) | private ExceptionUtils() {}
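
The `firstOrSuppressed` helper listed above follows a common Java error-handling pattern: the first exception stays primary and later ones are attached as suppressed. A minimal plain-Java sketch of that pattern, mirroring the Flink-style signature but independent of Flink (class and names here are illustrative):

```java
// Sketch of the firstOrSuppressed pattern: return the previous exception
// with the new one attached as suppressed, or the new one if none yet.
public class SuppressedDemo {
    public static <T extends Throwable> T firstOrSuppressed(T newException, T previous) {
        if (previous == null || previous == newException) {
            return newException;              // first failure becomes the primary
        }
        previous.addSuppressed(newException); // later failures ride along
        return previous;
    }

    public static void main(String[] args) {
        Exception collected = null;
        for (int i = 0; i < 3; i++) {
            collected = firstOrSuppressed(new IllegalStateException("step " + i), collected);
        }
        System.out.println(collected.getMessage());           // step 0
        System.out.println(collected.getSuppressed().length); // 2
    }
}
```

This shape is useful in cleanup loops (closing many resources) where every failure should be reported but only one exception can propagate.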

FILE: fire-enhance/apache-flink/src/main/java-flink-1.12/org/rocksdb/RocksDB.java
  class RocksDB (line 24) | public class RocksDB extends RocksObject {
    method elapsed (line 38) | protected void elapsed(long start) {
    type LibraryState (line 55) | private enum LibraryState {
    method loadLibrary (line 75) | public static void loadLibrary() {
    method loadLibrary (line 121) | public static void loadLibrary(final List<String> paths) {
    method open (line 185) | public static RocksDB open(final String path) throws RocksDBException {
    method open (line 222) | public static RocksDB open(final String path,
    method open (line 256) | public static RocksDB open(final Options options, final String path)
    method open (line 302) | public static RocksDB open(final DBOptions options, final String path,
    method openReadOnly (line 340) | public static RocksDB openReadOnly(final String path)
    method openReadOnly (line 363) | public static RocksDB openReadOnly(final String path,
    method openReadOnly (line 391) | public static RocksDB openReadOnly(final Options options, final String...
    method openReadOnly (line 423) | public static RocksDB openReadOnly(final DBOptions options, final Stri...
    method listColumnFamilies (line 462) | public static List<byte[]> listColumnFamilies(final Options options,
    method storeOptionsInstance (line 468) | protected void storeOptionsInstance(DBOptionsInterface options) {
    method put (line 481) | public void put(final byte[] key, final byte[] value)
    method put (line 500) | public void put(final ColumnFamilyHandle columnFamilyHandle,
    method put (line 516) | public void put(final WriteOptions writeOpts, final byte[] key,
    method put (line 538) | public void put(final ColumnFamilyHandle columnFamilyHandle,
    method keyMayExist (line 557) | public boolean keyMayExist(final byte[] key, final StringBuilder value) {
    method keyMayExist (line 574) | public boolean keyMayExist(final ColumnFamilyHandle columnFamilyHandle,
    method keyMayExist (line 593) | public boolean keyMayExist(final ReadOptions readOptions,
    method keyMayExist (line 613) | public boolean keyMayExist(final ReadOptions readOptions,
    method write (line 630) | public void write(final WriteOptions writeOpts, final WriteBatch updates)
    method write (line 644) | public void write(final WriteOptions writeOpts,
    method merge (line 659) | public void merge(final byte[] key, final byte[] value)
    method merge (line 675) | public void merge(final ColumnFamilyHandle columnFamilyHandle,
    method merge (line 692) | public void merge(final WriteOptions writeOpts, final byte[] key,
    method merge (line 710) | public void merge(final ColumnFamilyHandle columnFamilyHandle,
    method get (line 735) | public int get(final byte[] key, final byte[] value) throws RocksDBExc...
    method get (line 756) | public int get(final ColumnFamilyHandle columnFamilyHandle, final byte...
    method get (line 778) | public int get(final ReadOptions opt, final byte[] key,
    method get (line 801) | public int get(final ColumnFamilyHandle columnFamilyHandle,
    method get (line 820) | public byte[] get(final byte[] key) throws RocksDBException {
    method get (line 838) | public byte[] get(final ColumnFamilyHandle columnFamilyHandle,
    method get (line 862) | public byte[] get(final ReadOptions opt, final byte[] key)
    method get (line 882) | public byte[] get(final ColumnFamilyHandle columnFamilyHandle,
    method multiGet (line 898) | public Map<byte[], byte[]> multiGet(final List<byte[]> keys)
    method computeCapacityHint (line 925) | private static int computeCapacityHint(final int estimatedNumberOfItem...
    method multiGet (line 949) | public Map<byte[], byte[]> multiGet(
    method multiGet (line 997) | public Map<byte[], byte[]> multiGet(final ReadOptions opt,
    method multiGet (line 1043) | public Map<byte[], byte[]> multiGet(final ReadOptions opt,
    method remove (line 1092) | @Deprecated
    method delete (line 1107) | public void delete(final byte[] key) throws RocksDBException {
    method remove (line 1125) | @Deprecated
    method delete (line 1143) | public void delete(final ColumnFamilyHandle columnFamilyHandle,
    method remove (line 1161) | @Deprecated
    method delete (line 1178) | public void delete(final WriteOptions writeOpt, final byte[] key)
    method remove (line 1198) | @Deprecated
    method delete (line 1218) | public void delete(final ColumnFamilyHandle columnFamilyHandle,
    method singleDelete (line 1246) | @Experimental("Performance optimization for a very specific workload")
    method singleDelete (line 1273) | @Experimental("Performance optimization for a very specific workload")
    method singleDelete (line 1304) | @Experimental("Performance optimization for a very specific workload")
    method singleDelete (line 1335) | @Experimental("Performance optimization for a very specific workload")
    method getProperty (line 1369) | public String getProperty(final ColumnFamilyHandle columnFamilyHandle,
    method deleteRange (line 1392) | public void deleteRange(final byte[] beginKey, final byte[] endKey) th...
    method deleteRange (line 1415) | public void deleteRange(final ColumnFamilyHandle columnFamilyHandle, f...
    method deleteRange (line 1440) | public void deleteRange(final WriteOptions writeOpt, final byte[] begi...
    method deleteRange (line 1467) | public void deleteRange(final ColumnFamilyHandle columnFamilyHandle, f...
    method getProperty (line 1496) | public String getProperty(final String property) throws RocksDBExcepti...
    method getLongProperty (line 1522) | public long getLongProperty(final String property) throws RocksDBExcep...
    method getLongProperty (line 1550) | public long getLongProperty(final ColumnFamilyHandle columnFamilyHandle,
    method getAggregatedLongProperty (line 1577) | public long getAggregatedLongProperty(final String property) throws Ro...
    method newIterator (line 1593) | public RocksIterator newIterator() {
    method newIterator (line 1610) | public RocksIterator newIterator(final ReadOptions readOptions) {
    method getSnapshot (line 1626) | public Snapshot getSnapshot() {
    method releaseSnapshot (line 1640) | public void releaseSnapshot(final Snapshot snapshot) {
    method newIterator (line 1660) | public RocksIterator newIterator(
    method newIterator (line 1681) | public RocksIterator newIterator(final ColumnFamilyHandle columnFamily...
    method newIterators (line 1700) | public List<RocksIterator> newIterators(
    method newIterators (line 1720) | public List<RocksIterator> newIterators(
    method getDefaultColumnFamily (line 1745) | public ColumnFamilyHandle getDefaultColumnFamily() {
    method createColumnFamily (line 1763) | public ColumnFamilyHandle createColumnFamily(
    method dropColumnFamily (line 1782) | public void dropColumnFamily(final ColumnFamilyHandle columnFamilyHandle)
    method flush (line 1801) | public void flush(final FlushOptions flushOptions)
    method flush (line 1818) | public void flush(final FlushOptions flushOptions,
    method compactRange (line 1840) | public void compactRange() throws RocksDBException {
    method compactRange (line 1863) | public void compactRange(final byte[] begin, final byte[] end)
    method compactRange (line 1895) | @Deprecated
    method compactRange (line 1932) | @Deprecated
    method compactRange (line 1966) | public void compactRange(final ColumnFamilyHandle columnFamilyHandle)
    method compactRange (line 1998) | public void compactRange(final ColumnFamilyHandle columnFamilyHandle,
    method compactRange (line 2019) | public void compactRange(final ColumnFamilyHandle columnFamilyHandle,
    method compactRange (line 2058) | @Deprecated
    method compactRange (line 2100) | @Deprecated
    method pauseBackgroundWork (line 2117) | public void pauseBackgroundWork() throws RocksDBException {
    method continueBackgroundWork (line 2127) | public void continueBackgroundWork() throws RocksDBException {
    method getLatestSequenceNumber (line 2137) | public long getLatestSequenceNumber() {
    method disableFileDeletions (line 2149) | public void disableFileDeletions() throws RocksDBException {
    method enableFileDeletions (line 2173) | public void enableFileDeletions(final boolean force)
    method getUpdatesSince (line 2195) | public TransactionLogIterator getUpdatesSince(final long sequenceNumber)
    method setOptions (line 2201) | public void setOptions(final ColumnFamilyHandle columnFamilyHandle,
    method toNativeHandleList (line 2209) | private long[] toNativeHandleList(final List<? extends RocksObject> ob...
    method ingestExternalFile (line 2235) | public void ingestExternalFile(final List<String> filePathList,
    method ingestExternalFile (line 2261) | public void ingestExternalFile(final ColumnFamilyHandle columnFamilyHa...
    method destroyDB (line 2280) | public static void destroyDB(final String path, final Options options)
    method RocksDB (line 2290) | protected RocksDB(final long nativeHandle) {
    method open (line 2295) | protected native static long open(final long optionsHandle,
    method open (line 2310) | protected native static long[] open(final long optionsHandle,
    method openROnly (line 2314) | protected native static long openROnly(final long optionsHandle,
    method openROnly (line 2329) | protected native static long[] openROnly(final long optionsHandle,
    method listColumnFamilies (line 2334) | protected native static byte[][] listColumnFamilies(long optionsHandle,
    method put (line 2336) | protected native void put(long handle, byte[] key, int keyOffset,
    method put (line 2339) | protected native void put(long handle, byte[] key, int keyOffset,
    method put (line 2342) | protected native void put(long handle, long writeOptHandle, byte[] key,
    method put (line 2345) | protected native void put(long handle, long writeOptHandle, byte[] key,
    method write0 (line 2348) | protected native void write0(final long handle, long writeOptHandle,
    method write1 (line 2350) | protected native void write1(final long handle, long writeOptHandle,
    method keyMayExist (line 2352) | protected native boolean keyMayExist(final long handle, final byte[] key,
    method keyMayExist (line 2355) | protected native boolean keyMayExist(final long handle, final byte[] key,
    method keyMayExist (line 2358) | protected native boolean keyMayExist(final long handle,
    method keyMayExist (line 2361) | protected native boolean keyMayExist(final long handle,
    method merge (line 2365) | protected native void merge(long handle, byte[] key, int keyOffset,
    method merge (line 2368) | protected native void merge(long handle, byte[] key, int keyOffset,
    method merge (line 2371) | protected native void merge(long handle, long writeOptHandle, byte[] key,
    method merge (line 2374) | protected native void merge(long handle, long writeOptHandle, byte[] key,
    method get (line 2377) | protected native int get(long handle, byte[] key, int keyOffset,
    method get (line 2380) | protected native int get(long handle, byte[] key, int keyOffset,
    method get (line 2383) | protected native int get(long handle, long readOptHandle, byte[] key,
    method get (line 2386) | protected native int get(long handle, long readOptHandle, byte[] key,
    method multiGet (line 2389) | protected native byte[][] multiGet(final long dbHandle, final byte[][]...
    method multiGet (line 2391) | protected native byte[][] multiGet(final long dbHandle, final byte[][]...
    method multiGet (line 2394) | protected native byte[][] multiGet(final long dbHandle, final long rOp...
    method multiGet (line 2396) | protected native byte[][] multiGet(final long dbHandle, final long rOp...
    method get (line 2399) | protected native byte[] get(long handle, byte[] key, int keyOffset,
    method get (line 2401) | protected native byte[] get(long handle, byte[] key, int keyOffset,
    method get (line 2403) | protected native byte[] get(long handle, long readOptHandle,
    method get (line 2405) | protected native byte[] get(long handle, long readOptHandle, byte[] key,
    method delete (line 2407) | protected native void delete(long handle, byte[] key, int keyOffset,
    method delete (line 2409) | protected native void delete(long handle, byte[] key, int keyOffset,
    method delete (line 2411) | protected native void delete(long handle, long writeOptHandle, byte[] ...
    method delete (line 2413) | protected native void delete(long handle, long writeOptHandle, byte[] ...
    method singleDelete (line 2415) | protected native void singleDelete(
    method singleDelete (line 2417) | protected native void singleDelete(
    method singleDelete (line 2420) | protected native void singleDelete(
    method singleDelete (line 2423) | protected native void singleDelete(
    method deleteRange (line 2426) | protected native void deleteRange(long handle, byte[] beginKey, int be...
    method deleteRange (line 2429) | protected native void deleteRange(long handle, byte[] beginKey, int be...
    method deleteRange (line 2432) | protected native void deleteRange(long handle, long writeOptHandle, by...
    method deleteRange (line 2435) | protected native void deleteRange(long handle, long writeOptHandle, by...
    method getProperty0 (line 2438) | protected native String getProperty0(long nativeHandle,
    method getProperty0 (line 2440) | protected native String getProperty0(long nativeHandle, long cfHandle,
    method getLongProperty (line 2442) | protected native long getLongProperty(long nativeHandle, String property,
    method getLongProperty (line 2444) | protected native long getLongProperty(long nativeHandle, long cfHandle,
    method getAggregatedLongProperty (line 2446) | protected native long getAggregatedLongProperty(long nativeHandle, Str...
    method iterator (line 2448) | protected native long iterator(long handle);
    method iterator (line 2449) | protected native long iterator(long handle, long readOptHandle);
    method iteratorCF (line 2450) | protected native long iteratorCF(long handle, long cfHandle);
    method iteratorCF (line 2451) | protected native long iteratorCF(long handle, long cfHandle,
    method iterators (line 2453) | protected native long[] iterators(final long handle,
    method getSnapshot (line 2456) | protected native long getSnapshot(long nativeHandle);
    method releaseSnapshot (line 2457) | protected native void releaseSnapshot(
    method disposeInternal (line 2459) | @Override protected native void disposeInternal(final long handle);
    method getDefaultColumnFamily (line 2460) | private native long getDefaultColumnFamily(long handle);
    method createColumnFamily (line 2461) | private native long createColumnFamily(final long handle,
    method dropColumnFamily (line 2464) | private native void dropColumnFamily(long handle, long cfHandle)
    method flush (line 2466) | private native void flush(long handle, long flushOptHandle)
    method flush (line 2468) | private native void flush(long handle, long flushOptHandle, long cfHan...
    method compactRange0 (line 2470) | private native void compactRange0(long handle, boolean reduce_level,
    method compactRange0 (line 2472) | private native void compactRange0(long handle, byte[] begin, int begin...
    method compactRange (line 2475) | private native void compactRange(long handle, byte[] begin, int beginLen,
    method compactRange (line 2478) | private native void compactRange(long handle, boolean reduce_level,
    method compactRange (line 2481) | private native void compactRange(long handle, byte[] begin, int beginLen,
    method pauseBackgroundWork (line 2484) | private native void pauseBackgroundWork(long handle) throws RocksDBExc...
    method continueBackgroundWork (line 2485) | private native void continueBackgroundWork(long handle) throws RocksDB...
    method getLatestSequenceNumber (line 2486) | private native long getLatestSequenceNumber(long handle);
    method disableFileDeletions (line 2487) | private native void disableFileDeletions(long handle) throws RocksDBEx...
    method enableFileDeletions (line 2488) | private native void enableFileDeletions(long handle, boolean force)
    method getUpdatesSince (line 2490) | private native long getUpdatesSince(long handle, long sequenceNumber)
    method setOptions (line 2492) | private native void setOptions(long handle, long cfHandle, String[] keys,
    method ingestExternalFile (line 2494) | private native void ingestExternalFile(long handle, long cfHandle,
    method destroyDB (line 2497) | private native static void destroyDB(final String path,
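
The `multiGet` overloads above return a `Map<byte[], byte[]>`, and the listing shows a `computeCapacityHint` helper used to size that map. A plain-Java sketch of that kind of sizing rule, assuming `HashMap`'s default 0.75 load factor (the exact RocksJava formula may differ):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: choose an initial HashMap capacity that avoids rehashing
// for an estimated number of entries, given the default 0.75 load factor.
public class CapacityHint {
    public static int computeCapacityHint(int estimatedNumberOfItems) {
        // ceil(n / 0.75) + 1 leaves headroom so the map never resizes
        return (int) Math.ceil(estimatedNumberOfItems / 0.75) + 1;
    }

    public static void main(String[] args) {
        Map<byte[], byte[]> results = new HashMap<>(computeCapacityHint(100));
        System.out.println(computeCapacityHint(100)); // 135
    }
}
```

Pre-sizing matters here because `multiGet` knows the result count up front; letting the map grow incrementally would rehash several times for large key batches.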

FILE: fire-enhance/apache-flink/src/main/java-flink-1.13/org/apache/flink/client/deployment/application/ApplicationDispatcherBootstrap.java
  class ApplicationDispatcherBootstrap (line 74) | @Internal
    method ApplicationDispatcherBootstrap (line 95) | public ApplicationDispatcherBootstrap(
    method stop (line 113) | @Override
    method getApplicationExecutionFuture (line 124) | @VisibleForTesting
    method getApplicationCompletionFuture (line 129) | @VisibleForTesting
    method getClusterShutdownFuture (line 134) | @VisibleForTesting
    method runApplicationAndShutdownClusterAsync (line 143) | private CompletableFuture<Acknowledge> runApplicationAndShutdownCluste...
    method fixJobIdAndRunApplicationAsync (line 178) | private CompletableFuture<Void> fixJobIdAndRunApplicationAsync(
    method runApplicationAsync (line 211) | private CompletableFuture<Void> runApplicationAsync(
    method runApplicationEntryPoint (line 240) | private void runApplicationEntryPoint(
    method getApplicationResult (line 275) | private CompletableFuture<Void> getApplicationResult(
    method getJobResult (line 289) | private CompletableFuture<JobResult> getJobResult(
    method unwrapJobResultException (line 308) | private CompletableFuture<JobResult> unwrapJobResultException(
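
`runApplicationAndShutdownClusterAsync` above chains the application run with a cluster shutdown via `CompletableFuture`. The general run-then-cleanup chaining can be sketched in plain Java (names are illustrative, not the Flink internals):

```java
import java.util.concurrent.CompletableFuture;

// Sketch: run an async task, then always trigger cleanup,
// whether the task completed normally or exceptionally.
public class RunThenShutdown {
    static CompletableFuture<String> runApplication() {
        return CompletableFuture.supplyAsync(() -> "job finished");
    }

    static CompletableFuture<String> runAndShutdown(Runnable shutdownCluster) {
        return runApplication().whenComplete((result, error) -> {
            // whenComplete fires on success AND on failure
            shutdownCluster.run();
        });
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        String result = runAndShutdown(() -> log.append("shutdown")).join();
        System.out.println(result); // job finished
        System.out.println(log);    // shutdown
    }
}
```

Using `whenComplete` (rather than `thenRun`) is what guarantees the shutdown step even when the application future fails, which matches the bootstrap's goal of never leaving the cluster running after the job ends.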

FILE: fire-enhance/apache-flink/src/main/java-flink-1.13/org/apache/flink/configuration/GlobalConfiguration.java
  class GlobalConfiguration (line 48) | @Internal
    method getSettings (line 82) | public static Map<String, String> getSettings() {
    method getRestPort (line 89) | public static int getRestPort() {
    method getRestPortAndClose (line 96) | public static int getRestPortAndClose() {
    method GlobalConfiguration (line 111) | private GlobalConfiguration() {}
    method loadConfiguration (line 123) | public static Configuration loadConfiguration() {
    method loadConfiguration (line 133) | public static Configuration loadConfiguration(Configuration dynamicPro...
    method loadConfiguration (line 149) | public static Configuration loadConfiguration(final String configDir) {
    method loadConfiguration (line 161) | public static Configuration loadConfiguration(
    method loadYAMLResource (line 222) | private static Configuration loadYAMLResource(File file) {
    method fireBootstrap (line 303) | private static void fireBootstrap(Configuration config) {
    method getRunMode (line 313) | public static String getRunMode() {
    method loadTaskConfiguration (line 320) | private static void loadTaskConfiguration(Configuration config) {
    method isSensitive (line 369) | public static boolean isSensitive(String key) {
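
`loadYAMLResource` above reads flink-conf.yaml, which Flink treats as flat `key: value` lines rather than full YAML. A minimal plain-Java sketch of that parsing style (comment and whitespace handling here are assumptions, not the exact Flink code):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: parse flat "key: value" configuration lines, skipping
// blank lines and '#' comments, splitting on the first ": " only.
public class FlatYamlSketch {
    public static Map<String, String> parse(List<String> lines) {
        Map<String, String> conf = new LinkedHashMap<>();
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#")) {
                continue; // comment or blank line
            }
            int sep = trimmed.indexOf(": ");
            if (sep > 0) {
                conf.put(trimmed.substring(0, sep).trim(),
                         trimmed.substring(sep + 2).trim());
            }
        }
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> conf = parse(List.of(
                "# cluster settings",
                "jobmanager.rpc.address: localhost",
                "rest.port: 8081"));
        System.out.println(conf.get("rest.port")); // 8081
    }
}
```

Splitting only on the first `": "` is the important detail: values such as URLs or `hh:mm` durations contain colons of their own and must stay intact.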

FILE: fire-enhance/apache-flink/src/main/java-flink-1.13/org/apache/flink/contrib/streaming/state/EmbeddedRocksDBStateBackend.java
  class EmbeddedRocksDBStateBackend (line 99) | @PublicEvolving
    type PriorityQueueStateType (line 104) | public enum PriorityQueueStateType {
    method initZKClient (line 213) | private void initZKClient() {
    method isRoundRobin (line 245) | private boolean isRoundRobin() {
    method EmbeddedRocksDBStateBackend (line 252) | public EmbeddedRocksDBStateBackend() {
    method EmbeddedRocksDBStateBackend (line 261) | public EmbeddedRocksDBStateBackend(boolean enableIncrementalCheckpoint...
    method EmbeddedRocksDBStateBackend (line 270) | public EmbeddedRocksDBStateBackend(TernaryBoolean enableIncrementalChe...
    method EmbeddedRocksDBStateBackend (line 290) | private EmbeddedRocksDBStateBackend(
    method configure (line 378) | @Override
    method lazyInitializeForJob (line 387) | private void lazyInitializeForJob(
    method getNextStoragePath (line 440) | private File getNextStoragePath() {
    method createKeyedStateBackend (line 473) | @Override
    method createKeyedStateBackend (line 502) | @Override
    method createOperatorStateBackend (line 584) | @Override
    method configureOptionsFactory (line 604) | private RocksDBOptionsFactory configureOptionsFactory(
    method getMemoryConfiguration (line 668) | public RocksDBMemoryConfiguration getMemoryConfiguration() {
    method setDbStoragePath (line 682) | public void setDbStoragePath(String path) {
    method setDbStoragePaths (line 703) | public void setDbStoragePaths(String... paths) {
    method getDbStoragePaths (line 758) | public String[] getDbStoragePaths() {
    method isIncrementalCheckpointsEnabled (line 771) | public boolean isIncrementalCheckpointsEnabled() {
    method getPriorityQueueStateType (line 782) | public EmbeddedRocksDBStateBackend.PriorityQueueStateType getPriorityQ...
    method setPriorityQueueStateType (line 792) | public void setPriorityQueueStateType(
    method setPredefinedOptions (line 811) | public void setPredefinedOptions(@Nonnull PredefinedOptions options) {
    method getPredefinedOptions (line 827) | @VisibleForTesting
    method setRocksDBOptions (line 846) | public void setRocksDBOptions(RocksDBOptionsFactory optionsFactory) {
    method getRocksDBOptions (line 858) | @Nullable
    method getNumberOfTransferThreads (line 864) | public int getNumberOfTransferThreads() {
    method setNumberOfTransferThreads (line 876) | public void setNumberOfTransferThreads(int numberOfTransferThreads) {
    method getWriteBatchSize (line 884) | public long getWriteBatchSize() {
    method setWriteBatchSize (line 896) | public void setWriteBatchSize(long writeBatchSize) {
    method createOptionsAndResourceContainer (line 905) | @VisibleForTesting
    method createOptionsAndResourceContainer (line 910) | @VisibleForTesting
    method toString (line 920) | @Override
    method ensureRocksDBIsLoaded (line 938) | @VisibleForTesting
    method resetRocksDBLoadedFlag (line 1007) | @VisibleForTesting
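
When several local DB storage paths are configured, `getNextStoragePath` above can hand them out round-robin (see `isRoundRobin` in the listing). A plain-Java sketch of the round-robin variant using an atomic counter (the class and field names are assumptions for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: distribute RocksDB instances across configured local
// directories in round-robin order, safe under concurrent callers.
public class StoragePathPicker {
    private final String[] paths;
    private final AtomicInteger next = new AtomicInteger(0);

    public StoragePathPicker(String... paths) {
        this.paths = paths.clone();
    }

    public String nextStoragePath() {
        // floorMod keeps the index non-negative even after int overflow
        int idx = Math.floorMod(next.getAndIncrement(), paths.length);
        return paths[idx];
    }

    public static void main(String[] args) {
        StoragePathPicker picker = new StoragePathPicker("/data1", "/data2");
        System.out.println(picker.nextStoragePath()); // /data1
        System.out.println(picker.nextStoragePath()); // /data2
        System.out.println(picker.nextStoragePath()); // /data1
    }
}
```

Round-robin assignment spreads I/O across local disks when multiple state backends start on the same TaskManager, instead of letting them all pile onto one directory.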

FILE: fire-enhance/apache-flink/src/main/java-flink-1.13/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
  class CheckpointCoordinator (line 95) | public class CheckpointCoordinator {
    method getBaseInterval (line 163) | public long getBaseInterval() {
    method setBaseInterval (line 167) | public void setBaseInterval(long baseInterval) {
    method setCheckpointTimeout (line 171) | public void setCheckpointTimeout(long checkpointTimeout) {
    method getMinPauseBetweenCheckpoints (line 175) | public long getMinPauseBetweenCheckpoints() {
    method setMinPauseBetweenCheckpoints (line 179) | public void setMinPauseBetweenCheckpoints(long minPauseBetweenCheckpoi...
    method getInstance (line 185) | public static CheckpointCoordinator getInstance() {
    method CheckpointCoordinator (line 255) | public CheckpointCoordinator(
    method CheckpointCoordinator (line 287) | @VisibleForTesting
    method addMasterHook (line 392) | public boolean addMasterHook(MasterTriggerRestoreHook<?> hook) {
    method getNumberOfRegisteredMasterHooks (line 409) | public int getNumberOfRegisteredMasterHooks() {
    method setCheckpointStatsTracker (line 420) | public void setCheckpointStatsTracker(@Nullable CheckpointStatsTracker...
    method shutdown (line 434) | public void shutdown() throws Exception {
    method isShutdown (line 455) | public boolean isShutdown() {
    method triggerSavepoint (line 472) | public CompletableFuture<CompletedCheckpoint> triggerSavepoint(
    method triggerSynchronousSavepoint (line 489) | public CompletableFuture<CompletedCheckpoint> triggerSynchronousSavepo...
    method triggerSavepointInternal (line 498) | private CompletableFuture<CompletedCheckpoint> triggerSavepointInternal(
    method triggerCheckpoint (line 530) | public CompletableFuture<CompletedCheckpoint> triggerCheckpoint(boolea...
    method triggerCheckpoint (line 534) | @VisibleForTesting
    method startTriggeringCheckpoint (line 553) | private void startTriggeringCheckpoint(CheckpointTriggerRequest reques...
    method initializeCheckpoint (line 708) | private CheckpointIdAndStorageLocation initializeCheckpoint(
    method createPendingCheckpoint (line 726) | private PendingCheckpoint createPendingCheckpoint(
    method snapshotMasterState (line 789) | private CompletableFuture<Void> snapshotMasterState(PendingCheckpoint ...
    method snapshotTaskState (line 842) | private void snapshotTaskState(
    method onTriggerSuccess (line 868) | private void onTriggerSuccess() {
    method onTriggerFailure (line 881) | private void onTriggerFailure(
    method onTriggerFailure (line 897) | private void onTriggerFailure(@Nullable PendingCheckpoint checkpoint, ...
    method executeQueuedRequest (line 931) | private void executeQueuedRequest() {
    method chooseQueuedRequestToExecute (line 935) | private Optional<CheckpointTriggerRequest> chooseQueuedRequestToExecut...
    method chooseRequestToExecute (line 942) | private Optional<CheckpointTriggerRequest> chooseRequestToExecute(
    method maybeCompleteCheckpoint (line 951) | private boolean maybeCompleteCheckpoint(PendingCheckpoint checkpoint) {
    method receiveDeclineMessage (line 980) | public void receiveDeclineMessage(DeclineCheckpoint message, String ta...
    method receiveAcknowledgeMessage (line 1061) | public boolean receiveAcknowledgeMessage(
    method completePendingCheckpoint (line 1202) | private void completePendingCheckpoint(PendingCheckpoint pendingCheckp...
    method scheduleTriggerRequest (line 1310) | void scheduleTriggerRequest() {
    method sendAcknowledgeMessages (line 1321) | private void sendAcknowledgeMessages(
    method sendAbortedMessages (line 1337) | private void sendAbortedMessages(
    method failUnacknowledgedPendingCheckpointsFor (line 1364) | public void failUnacknowledgedPendingCheckpointsFor(
    method rememberRecentCheckpointId (line 1373) | private void rememberRecentCheckpointId(long id) {
    method dropSubsumedCheckpoints (line 1380) | private void dropSubsumedCheckpoints(long checkpointId) {
    method restoreLatestCheckpointedState (line 1411) | @Deprecated
    method restoreLatestCheckpointedStateToSubtasks (line 1448) | public OptionalLong restoreLatestCheckpointedStateToSubtasks(
    method restoreLatestCheckpointedStateToAll (line 1485) | public boolean restoreLatestCheckpointedStateToAll(
    method restoreInitialCheckpointIfPresent (line 1510) | public boolean restoreInitialCheckpointIfPresent(final Set<ExecutionJo...
    method restoreLatestCheckpointedStateInternal (line 1529) | private OptionalLong restoreLatestCheckpointedStateInternal(
    method restoreSavepoint (line 1646) | public boolean restoreSavepoint(
    method getNumberOfPendingCheckpoints (line 1692) | public int getNumberOfPendingCheckpoints() {
    method getNumberOfRetainedSuccessfulCheckpoints (line 1698) | public int getNumberOfRetainedSuccessfulCheckpoints() {
    method getPendingCheckpoints (line 1704) | public Map<Long, PendingCheckpoint> getPendingCheckpoints() {
    method getSuccessfulCheckpoints (line 1710) | public List<CompletedCheckpoint> getSuccessfulCheckpoints() throws Exc...
    method getCheckpointStorage (line 1716) | public CheckpointStorageCoordinatorView getCheckpointStorage() {
    method getCheckpointStore (line 1720) | public CompletedCheckpointStore getCheckpointStore() {
    method getCheckpointTimeout (line 1724) | public long getCheckpointTimeout() {
    method getTriggerRequestQueue (line 1729) | @Deprecated
    method isTriggering (line 1737) | public boolean isTriggering() {
    method isCurrentPeriodicTriggerAvailable (line 1741) | @VisibleForTesting
    method isPeriodicCheckpointingConfigured (line 1751) | public boolean isPeriodicCheckpointingConfigured() {
    method startCheckpointScheduler (line 1759) | public void startCheckpointScheduler() {
    method stopCheckpointScheduler (line 1776) | public void stopCheckpointScheduler() {
    method isPeriodicCheckpointingStarted (line 1790) | public boolean isPeriodicCheckpointingStarted() {
    method abortPendingCheckpoints (line 1799) | public void abortPendingCheckpoints(CheckpointException exception) {
    method abortPendingCheckpoints (line 1805) | private void abortPendingCheckpoints(
    method rescheduleTrigger (line 1822) | private void rescheduleTrigger(long tillNextMillis) {
Condensed preview — 728 files, each showing path, character count, and a content snippet (full structured content: 5,246K chars).
[
  {
    "path": ".gitignore",
    "chars": 43,
    "preview": ".idea/*\nfire-parent.iml\n*.iml\ntarget/\n*.log"
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "README.md",
    "chars": 6866,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/accumulator.md",
    "chars": 3082,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/anno.md",
    "chars": 5984,
    "preview": "# Fire框架--基于注解简化Flink和Spark开发\n\n  从JDK5开始,Java提供了**注解**新特性,随后,注解如雨后春笋般被大量应用到各种开发框架中,其中,最具代表的是Spring。在注解出现以前,Spring的配置通常需要"
  },
  {
    "path": "docs/connector/adb.md",
    "chars": 900,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/connector/clickhouse.md",
    "chars": 1518,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/connector/hbase.md",
    "chars": 14678,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/connector/hive.md",
    "chars": 4069,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/connector/jdbc.md",
    "chars": 8773,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/connector/kafka.md",
    "chars": 7645,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/connector/oracle.md",
    "chars": 909,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/connector/rocketmq.md",
    "chars": 7137,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/datasource.md",
    "chars": 3690,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/dev/config.md",
    "chars": 5257,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/dev/deploy-script.md",
    "chars": 2218,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/dev/engine-env.md",
    "chars": 21080,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/dev/integration.md",
    "chars": 3340,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/feature.md",
    "chars": 767,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/highlight/checkpoint.md",
    "chars": 2681,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/highlight/spark-duration.md",
    "chars": 2285,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/index.md",
    "chars": 1698,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/platform.md",
    "chars": 765,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/pom/flink-pom.xml",
    "chars": 20281,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Licensed to the Apache Software Foundation (ASF) under one or more\n  ~ c"
  },
  {
    "path": "docs/pom/spark-pom.xml",
    "chars": 20639,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Licensed to the Apache Software Foundation (ASF) under one or more\n  ~ c"
  },
  {
    "path": "docs/properties.md",
    "chars": 24101,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/restful.md",
    "chars": 3045,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/schedule.md",
    "chars": 1946,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "docs/threadpool.md",
    "chars": 1895,
    "preview": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE "
  },
  {
    "path": "fire-common/pom.xml",
    "chars": 3281,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Licensed to the Apache Software Foundation (ASF) under one or more\n  ~ c"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/anno/Config.java",
    "chars": 1403,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/anno/FieldName.java",
    "chars": 1696,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/anno/FireConf.java",
    "chars": 1407,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/anno/Internal.java",
    "chars": 1215,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/anno/Rest.java",
    "chars": 1366,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/anno/Scheduled.java",
    "chars": 1902,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/anno/TestStep.java",
    "chars": 1287,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/FireTask.java",
    "chars": 4565,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/analysis/ExceptionMsg.java",
    "chars": 3831,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/config/ConfigurationParam.java",
    "chars": 1734,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/lineage/Lineage.java",
    "chars": 1694,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLLineage.java",
    "chars": 2013,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLTable.java",
    "chars": 4061,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLTableColumns.java",
    "chars": 1955,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLTablePartitions.java",
    "chars": 1982,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/lineage/SQLTableRelations.java",
    "chars": 2158,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/rest/ResultMsg.java",
    "chars": 3005,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/rest/yarn/App.java",
    "chars": 7345,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/ClassLoaderInfo.java",
    "chars": 2250,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/CpuInfo.java",
    "chars": 6336,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/DiskInfo.java",
    "chars": 5775,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/DisplayInfo.java",
    "chars": 1696,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/HardwareInfo.java",
    "chars": 3415,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/JvmInfo.java",
    "chars": 6449,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/MemoryInfo.java",
    "chars": 2791,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/NetworkInfo.java",
    "chars": 5085,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/OSInfo.java",
    "chars": 3326,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/RuntimeInfo.java",
    "chars": 3362,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/ThreadInfo.java",
    "chars": 2710,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/bean/runtime/UsbInfo.java",
    "chars": 2637,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/ConfigureLevel.java",
    "chars": 1098,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/Datasource.java",
    "chars": 1661,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/ErrorCode.java",
    "chars": 1020,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/JdbcDriver.java",
    "chars": 1586,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/JobType.java",
    "chars": 1791,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/Operation.java",
    "chars": 1144,
    "preview": "package com.zto.fire.common.enu;\n\nimport org.apache.commons.lang3.StringUtils;\n\n/**\n * SQL的操作类型\n *\n * @author ChengLong "
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/RequestMethod.scala",
    "chars": 1090,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/ThreadPoolType.java",
    "chars": 978,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/enu/YarnState.java",
    "chars": 2039,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/exception/FireException.java",
    "chars": 1198,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/exception/FireFlinkException.java",
    "chars": 1226,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/exception/FireSparkException.java",
    "chars": 1226,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/EncryptUtils.java",
    "chars": 4036,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/FileUtils.java",
    "chars": 2166,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/FindClassUtils.java",
    "chars": 7934,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/HttpClientUtils.java",
    "chars": 7356,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/IOUtils.java",
    "chars": 2016,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/MathUtils.java",
    "chars": 1710,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/OSUtils.java",
    "chars": 5001,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/ProcessUtil.java",
    "chars": 2528,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/ReflectionUtils.java",
    "chars": 14647,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/StringsUtils.java",
    "chars": 8161,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/UnitFormatUtils.java",
    "chars": 7384,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/java/com/zto/fire/common/util/YarnUtils.java",
    "chars": 1511,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/resources/log4j.properties",
    "chars": 1508,
    "preview": "#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/bean/TableIdentifier.scala",
    "chars": 1662,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/conf/FireConf.scala",
    "chars": 1341,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/conf/FireFrameworkConf.scala",
    "chars": 15360,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/conf/FireHDFSConf.scala",
    "chars": 1396,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/conf/FireHiveConf.scala",
    "chars": 2981,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/conf/FireKafkaConf.scala",
    "chars": 5535,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/conf/FirePS1Conf.scala",
    "chars": 1640,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/conf/FireRocketMQConf.scala",
    "chars": 4072,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/conf/KeyNum.scala",
    "chars": 1535,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/ext/JavaExt.scala",
    "chars": 1423,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/ext/ScalaExt.scala",
    "chars": 2830,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/package.scala",
    "chars": 965,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/ConfigurationCenterManager.scala",
    "chars": 3386,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/DateFormatUtils.scala",
    "chars": 21942,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/ExceptionBus.scala",
    "chars": 3807,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/FireFunctions.scala",
    "chars": 5842,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/FireUtils.scala",
    "chars": 4346,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/JSONUtils.scala",
    "chars": 5329,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/JavaTypeMap.scala",
    "chars": 1987,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/KafkaUtils.scala",
    "chars": 7148,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/LineageManager.scala",
    "chars": 14138,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/LogUtils.scala",
    "chars": 2747,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/Logging.scala",
    "chars": 2409,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/MQProducer.scala",
    "chars": 6934,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/NumberFormatUtils.scala",
    "chars": 2739,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/PropUtils.scala",
    "chars": 18582,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/RegularUtils.scala",
    "chars": 799,
    "preview": "package com.zto.fire.common.util\n\n/**\n * 常用的正则表达式\n *\n * @author ChengLong 2021-5-28 11:14:19\n * @since fire 2.0.0\n */\nob"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/SQLLineageManager.scala",
    "chars": 5286,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/SQLUtils.scala",
    "chars": 2160,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/ScalaUtils.scala",
    "chars": 1485,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/ShutdownHookManager.scala",
    "chars": 3261,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/ThreadUtils.scala",
    "chars": 6581,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/Tools.scala",
    "chars": 1230,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/main/scala/com/zto/fire/common/util/ValueUtils.scala",
    "chars": 3350,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/test/scala/com/zto/fire/common/util/RegularUtilsUnitTest.scala",
    "chars": 2114,
    "preview": "package com.zto.fire.common.util\n\nimport com.zto.fire.common.anno.TestStep\nimport org.junit.Test\n\nimport java.io.StringR"
  },
  {
    "path": "fire-common/src/test/scala/com/zto/fire/common/util/SQLUtilsTest.scala",
    "chars": 2709,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/test/scala/com/zto/fire/common/util/ShutdownHookManagerTest.scala",
    "chars": 1534,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-common/src/test/scala/com/zto/fire/common/util/ValueUtilsTest.scala",
    "chars": 2110,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/.gitignore",
    "chars": 382,
    "preview": "# use glob syntax.\nsyntax: glob\n*.ser\n*.class\n*~\n*.bak\n#*.off\n*.old\n*.lck\n*.txt\n\n# eclipse conf file\n.settings\n.classpat"
  },
  {
    "path": "fire-connectors/base-connectors/fire-hbase/pom.xml",
    "chars": 3944,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Licensed to the Apache Software Foundation (ASF) under one or more\n  ~ c"
  },
  {
    "path": "fire-connectors/base-connectors/fire-hbase/src/main/java/com/zto/fire/hbase/anno/HConfig.java",
    "chars": 1430,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-hbase/src/main/scala/com/zto/fire/hbase/HBaseConnector.scala",
    "chars": 38085,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-hbase/src/main/scala/com/zto/fire/hbase/HBaseFunctions.scala",
    "chars": 10024,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-hbase/src/main/scala/com/zto/fire/hbase/bean/HBaseBaseBean.java",
    "chars": 1522,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-hbase/src/main/scala/com/zto/fire/hbase/bean/MultiVersionsBean.java",
    "chars": 3404,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-hbase/src/main/scala/com/zto/fire/hbase/conf/FireHBaseConf.scala",
    "chars": 4456,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-hbase/src/main/scala/com/zto/fire/hbase/utils/HBaseUtils.scala",
    "chars": 1938,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-jdbc/pom.xml",
    "chars": 2416,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Licensed to the Apache Software Foundation (ASF) under one or more\n  ~ c"
  },
  {
    "path": "fire-connectors/base-connectors/fire-jdbc/src/main/resources/driver.properties",
    "chars": 1761,
    "preview": "#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE"
  },
  {
    "path": "fire-connectors/base-connectors/fire-jdbc/src/main/scala/com/zto/fire/jdbc/JdbcConnector.scala",
    "chars": 12834,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-jdbc/src/main/scala/com/zto/fire/jdbc/JdbcConnectorBridge.scala",
    "chars": 3358,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-jdbc/src/main/scala/com/zto/fire/jdbc/JdbcFunctions.scala",
    "chars": 3492,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-jdbc/src/main/scala/com/zto/fire/jdbc/conf/FireJdbcConf.scala",
    "chars": 4524,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/fire-jdbc/src/main/scala/com/zto/fire/jdbc/util/DBUtils.scala",
    "chars": 6931,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/base-connectors/pom.xml",
    "chars": 2933,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Licensed to the Apache Software Foundation (ASF) under one or more\n  ~ c"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/pom.xml",
    "chars": 2659,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Licensed to the Apache Software Foundation (ASF) under one or more\n  ~ c"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/ClickHouseDynamicTableFactory.java",
    "chars": 9847,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/ClickHouseDynamicTableSink.java",
    "chars": 5182,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/ClickHouseDynamicTableSource.java",
    "chars": 4764,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/catalog/ClickHouseCatalog.java",
    "chars": 24526,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/catalog/ClickHouseCatalogFactory.java",
    "chars": 4727,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/config/ClickHouseConfig.java",
    "chars": 2354,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/config/ClickHouseConfigOptions.java",
    "chars": 7051,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/AbstractClickHouseInputFormat.java",
    "chars": 12013,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/AbstractClickHouseOutputFormat.java",
    "chars": 9789,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseBatchInputFormat.java",
    "chars": 5243,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseBatchOutputFormat.java",
    "chars": 4536,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseShardInputFormat.java",
    "chars": 7538,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseShardOutputFormat.java",
    "chars": 7593,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/ClickHouseStatementFactory.java",
    "chars": 4666,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/common/DistributedEngineFullSchema.java",
    "chars": 2880,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/connection/ClickHouseConnectionProvider.java",
    "chars": 7977,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/converter/ClickHouseConverterUtils.java",
    "chars": 8723,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/converter/ClickHouseRowConverter.java",
    "chars": 10640,
    "preview": "//\n// Source code recreated from a .class file by IntelliJ IDEA\n// (powered by FernFlower decompiler)\n//\n\npackage org.ap"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/executor/ClickHouseBatchExecutor.java",
    "chars": 3668,
    "preview": "//\n// Source code recreated from a .class file by IntelliJ IDEA\n// (powered by FernFlower decompiler)\n//\n\npackage org.ap"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/executor/ClickHouseExecutor.java",
    "chars": 6882,
    "preview": "//\n// Source code recreated from a .class file by IntelliJ IDEA\n// (powered by FernFlower decompiler)\n//\n\npackage org.ap"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/executor/ClickHouseUpsertExecutor.java",
    "chars": 5857,
    "preview": "//\n// Source code recreated from a .class file by IntelliJ IDEA\n// (powered by FernFlower decompiler)\n//\n\npackage org.ap"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/options/ClickHouseConnectionOptions.java",
    "chars": 2104,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/options/ClickHouseDmlOptions.java",
    "chars": 5849,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/options/ClickHouseReadOptions.java",
    "chars": 4840,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/partitioner/BalancedPartitioner.java",
    "chars": 611,
    "preview": "//\n// Source code recreated from a .class file by IntelliJ IDEA\n// (powered by FernFlower decompiler)\n//\n\npackage org.ap"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/partitioner/ClickHousePartitioner.java",
    "chars": 876,
    "preview": "//\n// Source code recreated from a .class file by IntelliJ IDEA\n// (powered by FernFlower decompiler)\n//\n\npackage org.ap"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/partitioner/HashPartitioner.java",
    "chars": 764,
    "preview": "//\n// Source code recreated from a .class file by IntelliJ IDEA\n// (powered by FernFlower decompiler)\n//\n\npackage org.ap"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/internal/partitioner/ShufflePartitioner.java",
    "chars": 605,
    "preview": "//\n// Source code recreated from a .class file by IntelliJ IDEA\n// (powered by FernFlower decompiler)\n//\n\npackage org.ap"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseBatchBetweenParametersProvider.java",
    "chars": 1755,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseBetweenParametersProvider.java",
    "chars": 2368,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseParametersProvider.java",
    "chars": 4121,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseShardBetweenParametersProvider.java",
    "chars": 3815,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/split/ClickHouseShardTableParametersProvider.java",
    "chars": 2565,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/util/ClickHouseTypeUtil.java",
    "chars": 5133,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/util/ClickHouseUtil.java",
    "chars": 6155,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/util/FilterPushDownHelper.java",
    "chars": 10351,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/java-flink-1.14/org/apache/flink/connector/clickhouse/util/SqlClause.java",
    "chars": 1750,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-clickhouse/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory",
    "chars": 921,
    "preview": "# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE f"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-es/pom.xml",
    "chars": 2166,
    "preview": "<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:sc"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/pom.xml",
    "chars": 3879,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Licensed to the Apache Software Foundation (ASF) under one or more\n  ~ c"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQConfig.java",
    "chars": 7569,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQSink.java",
    "chars": 8448,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQSinkWithTag.java",
    "chars": 8575,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQSource.java",
    "chars": 15634,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements.  See the NOTIC"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RocketMQUtils.java",
    "chars": 1447,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/RunningChecker.java",
    "chars": 1122,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/selector/DefaultTopicSelector.java",
    "chars": 1410,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/selector/SimpleTopicSelector.java",
    "chars": 2763,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/selector/TopicSelector.java",
    "chars": 1005,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/JsonSerializationSchema.java",
    "chars": 1864,
    "preview": "package org.apache.rocketmq.flink.common.serialization;\n\nimport org.apache.flink.api.common.serialization.SerializationS"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/KeyValueDeserializationSchema.java",
    "chars": 1111,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/KeyValueSerializationSchema.java",
    "chars": 1036,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/SimpleKeyValueDeserializationSchema.java",
    "chars": 2353,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/SimpleKeyValueSerializationSchema.java",
    "chars": 2226,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java/org/apache/rocketmq/flink/common/serialization/TagKeyValueSerializationSchema.java",
    "chars": 1287,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.12/org/apache/rocketmq/flink/RocketMQSourceWithTag.java",
    "chars": 16222,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements.  See the NOTIC"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.12/org/apache/rocketmq/flink/common/serialization/JsonDeserializationSchema.java",
    "chars": 2262,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.12/org/apache/rocketmq/flink/common/serialization/SimpleTagKeyValueDeserializationSchema.java",
    "chars": 1918,
    "preview": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NO"
  },
  {
    "path": "fire-connectors/flink-connectors/flink-rocketmq/src/main/java-flink-1.12/org/apache/rocketmq/flink/common/serialization/TagKeyValueDeserializationSchema.java",
    "chars": 1191,
    "preview": "/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOT"
  }
]

// ... and 528 more files (download for full content)

About this extraction

This page contains the full source code of the ZTO-Express/fire GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 728 files (4.6 MB), approximately 1.3M tokens, and a symbol index with 3524 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
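The file index above is a JSON array of `{path, chars, preview}` records, one per extracted file. As a minimal sketch (not part of GitExtract itself), the index can be parsed with Python's standard `json` module to summarize sizes per top-level module; the two sample entries below are copied from the listing above, and the variable names are illustrative:

```python
import json
from collections import Counter

# Two entries copied from the index above; a real run would load the
# downloaded .txt/.json index instead of this inline sample.
index_json = """
[
  {"path": "fire-connectors/base-connectors/fire-hbase/pom.xml",
   "chars": 3944, "preview": "<?xml version=..."},
  {"path": "fire-connectors/base-connectors/fire-jdbc/pom.xml",
   "chars": 2416, "preview": "<?xml version=..."}
]
"""

entries = json.loads(index_json)

# Total size of the indexed files, in characters
total_chars = sum(e["chars"] for e in entries)

# Count files per top-level directory (the module name)
by_module = Counter(e["path"].split("/")[0] for e in entries)

print(total_chars)              # 6360
print(by_module["fire-connectors"])  # 2
```

The same pattern scales to the full 728-entry index, e.g. to find the largest files or to filter by extension before feeding a subset to a model.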

Extracted by GitExtract, a free GitHub repository-to-text converter for AI, built by Nikandr Surkov.